In an interview with MLex, Margrethe Vestager cautioned against a “whatever it takes” approach to supporting the nascent industry, stressing that EU rules on AI systems and merger-control powers give regulators the tools to address areas of concern.
“I think there’s a lot of potential in AI, also for gains in productivity, but I think we do have time to do it in a proper way,” the executive vice-president of the European Commission said.
Next month, Vestager will likely end her time as commissioner, drawing a line under a decade in Brussels in which she investigated companies such as Google and Microsoft and crafted new rules for cybersecurity, online services and digital markets.
As she readies to leave, the AI revolution is gathering pace, posing questions for antitrust enforcers as well as the EU’s lawmakers, who agreed the world’s first comprehensive AI legislation that entered into force in August.
Her warning is stark: “The risk is that if we are lazy, we are just the meat to the machine. We will not be the decision makers any more.”
In recent weeks, tech companies have been touring Brussels and national capitals, pleading with politicians not to shackle the AI industry just as it gets going. They point to AI as an engine for much-needed growth, and if Europe stifles that, then the continent will fall even further behind the US and China, they argue.
Vestager said any curbs on European growth were more down to the lack of a capital market than an excess of regulation. “We still have a lot of fragmentation in the single market and we need to solve that as a matter of priority,” she said.
High-profile reports by former Italian prime ministers Enrico Letta and Mario Draghi have highlighted how the EU’s single market remains highly fragmented, with barriers in telecommunications and capital markets limiting EU companies' ability to expand and compete internationally.
In an open letter in September, coordinated by Meta Platforms, European researchers and companies including Ericsson, Spotify and SAP warned that the EU’s “fragmented and inconsistent” regulation of data privacy and AI put the bloc at risk of slipping further behind in AI development. They criticized EU privacy watchdogs for generating “huge uncertainty” about what kinds of data can be used to train AI models.
But Vestager said it would be a mistake for regulators to vacate the field just so AI can continue its stratospheric rise.
“I think it’s really, really short-sighted to say: ‘whatever it takes, we just want AI to prevail,’” she said, alluding to Draghi's famous 2012 pledge, made while he headed the European Central Bank, to do “whatever it takes” to save the euro during the sovereign debt crisis.
Vestager pointed to copyright laws and the need to respect the work of artists, authors, musicians and journalists: “Would you really want to sell out entire sectors for the benefit of technology? I think that’s the wrong balance.”
— US dominance —
The fear among some regulators is that the AI revolution will simply lead to a further entrenchment of market power for the current crop of technology titans such as Microsoft, Google and Meta Platforms.
But Vestager stressed that Europe had its own competitive advantage in its “industrial culture” and large public sectors. If AI can make those two slices of the economy “deliver better in a way that people trust” — including privacy and safety — then “we have created a market that equals nothing on this planet, and which doesn’t exist today.”
Europe isn’t alone in its desire to ensure that AI develops in a safe and respectful manner, she said. The key is ensuring companies and governments quickly arrive at compromises over the shape of regulation without one “steamrollering” the other.
The EU’s landmark AI Act — a legal framework regulating AI based on its capacity to cause harm — is an “excellent start” to overseeing the sector and doesn’t yet require changes, the commissioner said. The regulation took effect on Aug. 1, and its obligations will be phased in gradually over the next three years.
This contrasts with the mix-and-match approach across the US, which has “increased fragmentation” in areas such as privacy. While the US splits privacy regulation between the state and federal levels, and lacks a comprehensive federal privacy law, the EU has a single regulation that applies to all 27 member states.
On top of regulation, Europe is engaged with partners on a voluntary code of conduct proposed by Group of Seven countries last October. This means companies commit to focus on risk mitigation, transparency, incident reporting, watermarking for AI-generated content, and adopting international technical standards (see here).
All this means that “Europe is still way ahead of the curve compared to the rest of the world,” Vestager said.
Her likely successor in tech policy, Finnish politician Henna Virkkunen, plans to propose legislation to encourage the uptake of AI and cloud technologies if confirmed as a commissioner responsible for "tech sovereignty, security and democracy" (see here).
In written answers to European Parliament members ahead of a live scrutiny session scheduled for Nov. 12, Virkkunen said she would present an “AI and Cloud Development Act” that would aid “large-scale investments in cloud and AI facilities.”
A second initiative, Apply AI Strategy, is meant to boost innovation in AI. The goal is to promote the rapid deployment of new AI solutions across leading industrial sectors and public administrations, she said (see here).
— Consent v Legitimate Interest —
Training AI models is key for the technology’s progress, but it has been developed in a legal gray area with questions over the harvesting and processing of personal data and copyright-protected content. Public authorities and private litigators are scrambling to clarify and enforce how legal obligations apply.
The EU's General Data Protection Regulation requires that companies have a legal basis for data processing. Meta has argued that its choice of “legitimate interest” as the legal basis to process data to train its AI models is the “most appropriate balance.”
Campaign groups across Europe have called for Meta to change its legal basis to consent (see here). Vestager expressed skepticism about the use of “legitimate interests” for AI model training, arguing that Europe has a different approach.
It would be a mistake for generative AI model providers to focus only on personal data by arguing that they have “a legitimate interest to use whatever [data] in order to develop” their models, she said.
In Europe, data that is created with public funding — from traffic, weather, health or satellite systems — is made available.
Model providers that rely only on personal data and don't bother with public data may “eventually run out of data because you use what you find online, and then you get into the problem that you do not have specific training data,” Vestager said.
— AI partnerships —
Companies such as Microsoft and Amazon have courted controversy by arranging partnerships with smaller AI companies that allow them to scale their inventions but raise questions over the exact nature of the larger companies’ involvement.
OpenAI, Anthropic, Mistral AI and Inflection AI have all benefited from the support of major tech companies, but have often seen staff leave to join the bigger investor.
Competition authorities have viewed such partnerships skeptically, voicing concerns that they might be mergers by the back door. They have looked into the investments, assessing whether they can be caught by merger law. But, as yet, none has intervened.
In September, the European Commission said that Microsoft's deal with Inflection AI amounted to M&A activity but that it didn't harm competition (see here).
This growing trend doesn’t mean that the rulebook should be changed, Vestager said. She believes existing merger law is “sufficiently flexible” to deal with such activity.
“What we have found so far is that there have not been sufficient doubts to open official investigations. But I definitely think that you could do it if you found that this was actually seizing control, and you would have it analyzed, investigated in order to be able to go ahead,” she said.
But it is too soon to develop any guidelines on such practices since there have been few formal cases, Vestager explained.
— AI’s new regulatory landscape —
The next leap in AI development will fall to Vestager’s successors to regulate. Alongside Virkkunen, Spanish socialist Teresa Ribera will take over the competition-policy powers currently held by Vestager.
In competition, regulators will be under pressure to stop the likes of Microsoft, Google and Nvidia from increasing and potentially abusing their market power. And for regulation, Virkkunen will have to decide if existing rules governing digital markets can capture the use of AI applications in areas such as online search and social networks.
While the EU led the AI regulatory vanguard in recent years, attention has shifted to how the bloc will enforce its new rulebook. Hard work lies ahead in putting the AI Act’s governance architecture into action.
Now that the lawmaking chapter is closed, companies will be watching how far the EU lives up to its ambition of establishing the AI Act as the global standard without driving away innovators and the next phase of the AI revolution.
Please e-mail editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.