Facebook, Google, Apple heads visit Europe, nervous about new AI rules

Facebook's Mark Zuckerberg and Google's Sundar Pichai have journeyed to Brussels as the European Union drafts regulation for artificial intelligence and the digital economy, including first-of-its-kind rules on the ways companies can use the technology.

By Adam Satariano
Published: Feb 17, 2020

Mark Zuckerberg, the chief executive of Facebook, in Palo Alto, Calif., April 11, 2019. Zuckerberg is to meet with Margrethe Vestager, the executive vice president of the European Commission who is coordinating the EU's artificial intelligence policy, on Feb. 17. (Jessica Chou/The New York Times)

LONDON — First came Sundar Pichai, the chief executive of Google’s parent company, Alphabet. Then Apple’s senior vice president for artificial intelligence, John Giannandrea, showed up.

And Monday, Mark Zuckerberg, Facebook’s chief executive, is joining in with his own trip to Brussels to meet with officials like Margrethe Vestager, the executive vice president of the European Commission.

The main reason so many Silicon Valley executives are paying court in the European Union’s capital: EU lawmakers are debating a new digital policy, including first-of-its-kind rules on the ways that artificial intelligence can be used by companies. That has far-reaching implications for many industries — but especially for tech behemoths like Google, Facebook and Apple that have bet big on artificial intelligence.

“While AI promises enormous benefits for Europe and the world, there are real concerns about the potential negative consequences,” Pichai said in a speech last month when he visited Brussels. He said regulation of AI was needed to ensure proper human oversight, but added “there is a balance to be had” to ensure that rules do not stifle innovation.

Silicon Valley executives are taking action as Europe has increasingly set the standard on tech policy and regulation. In recent years, the EU has passed laws on digital privacy and penalized Google and others on antitrust matters, which has inspired tougher action elsewhere in the world. The new AI policy is also likely to be a template that others will adopt.

Artificial intelligence — where machines are being trained to learn how to perform jobs on their own — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies. Yet it presents new risks to individual privacy and livelihoods — including the possibility that the tech will replace people in their jobs.

A first draft of the AI policy, which is being coordinated by Vestager, will be released Wednesday, along with broader recommendations outlining the bloc’s digital strategy for the coming years. The debate over the policies, including how to expand Europe’s homegrown tech industry, is expected to last through 2020.

The AI proposal is expected to outline riskier uses of the technology — such as in health care and in transportation, including self-driving cars — and how those will come under tougher government scrutiny.

In an interview, Vestager said AI was one of the world’s most promising technologies, but it presents many dangers because it requires trusting complex algorithms to make decisions based on vast amounts of data. She said there must be privacy protections, rules to prevent the technology from causing discrimination, and requirements that ensure companies using the systems can explain how they work.

Europe is working on the AI policy at the direction of Ursula von der Leyen, the new head of the European Commission, which is the executive branch for the 27-nation bloc. Von der Leyen, who took office in November, immediately gave Vestager a 100-day deadline to release an initial proposal about AI.

The tight time frame has raised concerns that the rules are being rushed. Artificial intelligence is not monolithic and its use varies depending on the field where it is being applied. Its effectiveness largely relies on data pulled from different sources. Overly broad regulations could stand in the way of the benefits, such as diagnosing disease, building self-driving vehicles or creating more efficient energy grids, some in the tech industry warned.

“There is an opportunity for leadership, but it cannot just be regulatory work,” said Ian Hogarth, a London-based angel investor who focuses on AI. “Just looking at this through the lens of regulations makes it hard to push the frontiers of what’s possible.”

Europe’s AI debate is part of a broader move away from an American-led view of technology. For years, U.S. lawmakers and regulators largely left Silicon Valley companies alone, allowing the firms to grow unimpeded and with little scrutiny of problems such as the spread of disinformation on social networks.

Policymakers in Europe and elsewhere stepped in with a more hands-on approach, setting boundaries on privacy, antitrust and harmful internet content. Last week, Britain unveiled plans to create a new government regulator to oversee internet content.

“Technology is fragmenting along geopolitical lines,” said Wendy Hall, a computer science professor at the University of Southampton who has been an adviser to the British government on AI.

In the interview, Vestager compared Europe’s more assertive stance on tech regulation to its regulation of agriculture. Many pesticides and chemicals that are allowed in the U.S. are banned in Europe.

“It is quite the European approach to say if things are risky, then we as a society want to regulate this,” she said. “The main thing is for us to create societies where people feel that they can trust what is going on.”

European policymakers have no shortage of ideas to wade through in drafting the AI policy. Since 2018, 44 reports with recommendations for “ethical artificial intelligence” have been published by various organizations, according to a PricewaterhouseCoopers report.

The rules will have important consequences for Apple, Facebook and Google. The tech giants have invested heavily in AI in recent years and have battled to hire the world’s top engineers. Artificial intelligence now underpins Apple products such as Siri and Face ID, helps power Google’s search engine and self-driving cars, and drives Facebook’s advertising business.

Apple declined to comment. A Google spokesman referred back to the comments made by Pichai.

During Pichai’s visit to Brussels last month, he was asked what the region’s AI rules should look like. He warned there would be lasting economic consequences if Europe did too much.

“The ability of European industry to adopt and adapt AI for its needs is going to be very critical for the continent’s future,” he said. “It’s important to keep that in mind.”

©2020 New York Times News Service