The global AI summit series will continue, with its next iteration now to be fully hosted by India. But the Paris summit this week saw previous headline concerns over AI safety and regulation starkly downgraded, and a new front line emerge between “democratic” and “autocratic” AI. The summit series is keeping the global conversation on AI issues going, but the focus has shifted to national competition, not protecting against global risks.
The AI show is still on the road, but have safety and regulation been dumped along the way? The US and UK snubbing the Paris AI Action Summit declaration this week underlined how nations have refocused on the scramble to win the global race to cash in on artificial intelligence, rather than regulate it or address its multitude of risks.
"Big AI" is now pushing hard on “democratic AI” versus “autocratic AI” — i.e. the West versus China — as open-source versus closed-source technology becomes a new frontline. Under huge pressure, the EU started retreating on AI regulation during the summit itself, with promises to slash more red tape. India as the next host leaves everything wide open.
The summit was certainly a big deal: it brought business and government together, brought China back to the table and laid foundations for the next summit. It was helped by some huge recent talking points, such as the explosion onto the scene of China's DeepSeek, plus Donald Trump’s second US presidential term and his Stargate AI infrastructure investment plan.
But AI safety and regulation — the paramount focus of the debut summit in the UK in late 2023 — became fringe topics at best.
US Vice President JD Vance summed it up in his speech: “I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity.”
Claire Fernandez, executive director of digital rights organization EDRI, told MLex: “[President] Macron delivered a Trumpian show of force, pushing in full swing for AI investments, economic competition, industrial policies and de-regulation agendas. The summit completely ignored the local and global harms of AI on people and the environment.”
Even a modest mention of safety in the summit declaration ended up watered down.
The UK's Bletchley Park summit, and a second summit in Seoul last year, are recognized for advancing global cooperation on safety, and an early draft of the declaration seen by MLex (see here) stated: “We committed to a clear monitoring of the progress achieved on the voluntary commitments [there].” But in the final declaration this became: “We note the voluntary commitments launched there” (see here).
The work of the global network of AI safety institutes was dropped from the Paris declaration.
One Chinese envoy even disparaged the international safety report commissioned by the Bletchley Park summit and published just ahead of Paris.* Fu Ying, an academic at Tsinghua University and China’s former vice minister of foreign affairs, told lead author Yoshua Bengio it was “very, very long” and that she’d only managed the first 100 pages of the 400-page Chinese translation.
— Regulatory breaking point —
The notion of regulation in general took a beating.
Vance told the closing ceremony: “We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off, and we'll make every effort to encourage pro-growth AI policies.” He said he relished “that deregulatory flavor making its way into a lot of the conversations at this conference.”
Following him onto the stage, European Commission President Ursula von der Leyen announced a $206 billion AI investment plan, a quarter of which was public money. And she said the EU, now deeply focused on global competitiveness, must cut bureaucracy around regulation, without providing any details (see here).
Within hours, in Brussels, the commission said it was dropping its proposal for an AI Liability Directive, and if legislators wanted to keep debating it they had to make the argument for it (see here and here).
Across Paris, EU economy ministers at a side event focused on tweaking the Digital Markets Act to better handle anticompetitive behaviors among the big AI and cloud companies, and France's Europe minister pushed open-source AI as one solution.
OpenAI made some waves by saying Europe wouldn’t get AI infrastructure investment until it overhauled its regulation to be more innovation-friendly (see here).
— Democracy —
The summit saw a new front line open, with the conversation shifting away from measurable standards and tricky talks on consensus and regulation towards the soft glow of democratic norms.
From the US side, there seemed to be immediate alignment between government and AI developers. Vance again: “We feel very strongly that AI must remain free from ideological bias, and that American AI will not be co-opted into a tool for authoritarian censorship.”
An OpenAI executive then told a media briefing that competition between the US and “CCP-led China” — a reference to the Chinese Communist Party — boils down to “democratic AI versus autocratic AI.”
For OpenAI, the "global rails of AI" will be built by one of the two countries, and it wants that to be the US: "free, democratic AI, informed by democratic principles and democratic values" rather than "autocratic, authoritarian AI.”
In a statement released around the same time, Anthropic CEO Dario Amodei wrote that the “need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching — these should all be central features of the next summit.”
— France to India —
While safety and regulation may have been the losers in Paris, the summit set a positive precedent for the next host, India.
Despite its shift towards authoritarianism under Prime Minister Narendra Modi, India is the world’s largest democracy, meaning a possible continuation of the democratic AI theme, but also a possible clash. Its vision of “democratic” AI embraces open source, as does China’s — a stance opposed by the US, a proponent of closed-source models.
“We must develop open-source systems that enhance trust and transparency. We must build quality data sets, free from biases,” Modi said. “We must democratize technology and create people-centric applications” (see here).
India has been getting on with its own open-source projects and low-cost digital services, and is selling government technology abroad. Indian speakers at the conference gave a sense of impatience for progress.
“Some people worry about machines becoming superior in intelligence to humans. But no one holds the key to our collective future and shared destiny other than us humans. That sense of responsibility must guide us,” Modi said.
Just don't expect that responsibility to spawn global agreement on setting rules for the technology any time soon.
*‘Governing in the Age of AI,’ Tony Blair Institute for Global Change, Paris, Feb. 9, 2025
Please e-mail editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.