The European Union was the first jurisdiction to pass a comprehensive legal framework for regulating AI. But lawmakers in many US states are racing to enshrine their own rules, as are their counterparts in Asia (see here). The UK, in contrast, is mastering the art of “wait and see” as regulators fear choking off innovation.
Data protection authorities in the Group of Seven club of nations are working to ensure that Big Tech, academia and other organizations follow a code of conduct on developing advanced AI models.
The EU’s current digital chief, Margrethe Vestager, warns about regulatory complacency and human laziness as companies integrate generative AI into their products and services, from human resources software to algorithmic trading that executes deals in nanoseconds.
"My biggest fear is that we get lazy,” she told MLex in a recent interview (see here), just a few weeks before she steps down next month after a decade as a European Commissioner. “It's so easy as humans to lean back and say: ‘These machines are so good. Why bother? They'll fix those things.’ ”
The EU’s AI Act, which lawmakers rushed to agree last December — just over a year after OpenAI’s ChatGPT opened the world’s eyes to computers’ ability to conjure up original text or images — set a framework for regulating AI based on its capacity to cause harm.
The landmark law entered into force in August, starting the clock on a series of statutory deadlines over the coming three years (see here). The EU’s legislative effort might be over, but wearing the regulator’s hat will be an even more arduous challenge. And the world will be watching how far the EU lives up to its ambition of making the AI Act the global gold standard.
— EU implements its AI Act —
Now that the EU's lawmaking chapter is closed — not to forget the fraught couple of years of hard legislative bargaining that captured the attention of players in the nascent AI industry — the focus has shifted to its regulatory posture.
The AI Act’s governance architecture is divided into two layers: broadly speaking, national authorities will oversee risky AI systems, while the European Commission’s AI Office will police general-purpose AI model providers such as OpenAI and Anthropic.
The AI Office is under enormous pressure to keep up with its deliverables while also building up internal capacity to regulate arguably the most complex technology of our time (see here).
Its first and most urgent task is to issue a code of practice that general-purpose AI model providers can use to show their compliance with the upcoming rules. A first draft of the code for providers — the likes of OpenAI, Meta Platforms and Anthropic — is currently expected in mid-November after some delay.
The code aims to provide a crucial compliance tool for model providers. Around 1,000 participants are involved in its development, drawn from AI companies, downstream players, independent experts, civil society organizations and other stakeholders.
It will only succeed if it effectively applies the AI Act’s provisions and the model providers adhere to it. Many of them, however, are ready to draw a red line if it goes beyond the underlying legislation, especially if it jeopardizes trade secrets.
That is the particular risk model providers see in the “sufficiently detailed summary” they must provide of the copyright-protected data they feed into their models. On the thorny issue of AI and copyright, EU countries have also begun a reflection of their own (see here).
Another headache for the AI Office concerns the technical standards, which will be critical tools for reducing compliance costs, something especially important for smaller players.
As things stand, the EU’s standardization process is already behind schedule, and only a partial delivery is likely by the time the legal requirements take effect in August 2026 (see here).
Other deliverables loom, notably the guidelines to clarify the AI definition and the use cases that the act bans, which are due by next February. As the act’s text was rushed into a political agreement, several vital provisions would benefit from being clarified.
At the national level, EU countries are just getting started on setting up their internal governance structures and are similarly eager for the commission to explain the legal requirements. As new deadlines approach, more doubts will arise (see here).
In this hectic context, flaws in the legal text have begun to surface. For instance, no authority will be in place to enforce the law’s bans for the first six months (see here).
At the same time, the ambiguity of certain provisions is prompting anxiety that companies’ regulatory risk might be larger than anticipated, for example, with obligations emerging from simply fine-tuning an AI model (see here).
— US legislative patchwork —
While the EU is taking a comprehensive approach, in the United States, concerns about chilling innovation are interfering with efforts to pass new legislation, at both the federal and state levels.
But new AI laws are coming, US lawmakers insist. They’re taking a more incremental approach and aren’t looking to pass one comprehensive federal AI law modeled on the EU's new law, Republican members of the US Congress who sit on AI taskforces said in September (see here).
Future AI proposals will be “embedded” in nearly every piece of legislation passed by the Senate and House of Representatives in the coming years, South Dakota Senator Mike Rounds said recently at an AI conference, and every House and Senate committee will have AI regulations built into their top legislative proposals.
The focus won’t just be on preventing harm. Investing in AI will also be a top priority, he said, pointing to the Senate’s “road map” for regulating AI that proposed a $32 billion annual investment in AI research and development by 2026.
At the same time, US lawmakers and the tech industry have been pushing back against states' efforts to pass more comprehensive AI legislation (see here), arguing that states are creating a confusing patchwork of regulations that will hurt small startups.
Most state lawmakers have focused on regulating the most problematic use of AI: deepfakes. For example, 19 US states have passed laws aimed at preventing misinformation or requiring disclosure to address the use of AI in elections, some of which are already being challenged in court.
Colorado is the only US state to pass an AI law focused on consumer protections and high-risk AI systems. Starting in February 2026, makers and deployers of high-risk AI systems in Colorado will have to be far more transparent about how their technology operates, how it’s used, and whom it could hurt.
But concerns about the potentially negative impact on the state’s budding AI industry almost killed the bill. Colorado Governor Jared Polis, a Democrat, signed the bill with “reservations.”
California was home to this year’s most controversial and closely watched AI safety bill. SB 1047 aimed to prevent AI systems from causing catastrophes and required “kill switches” that could deactivate AI systems if they went rogue or caused harm to consumers. Federal lawmakers, including former House Speaker Nancy Pelosi, and the tech industry said the bill went too far and would hurt innovation.
California’s governor, Gavin Newsom, agreed and vetoed the bill, saying it gave the public a “false sense of security” by focusing only on the “most expensive and large-scale models,” and could curtail “the very innovation that fuels advancement in favor of the public good.”
Fears about unintended consequences killed another risk-based AI bill in Connecticut that aimed to regulate AI on a comprehensive scale more like the EU's AI Act. Business groups and Republican legislators argued it would drive away investment and cripple new AI businesses, especially small startups. Governor Ned Lamont said he would veto it if it came to his desk.
In the absence of a federal AI law, the Biden Administration is turning to self-regulation and voluntary commitments to prevent catastrophes. Amazon, Anthropic, Apple, Google, Inflection, Meta Platforms, Microsoft and OpenAI are all now working to fulfill voluntary safety commitments they made to the White House last year, including conducting internal and external security tests of their latest AI systems before they’re released.
Federal agencies, including the US Department of Justice and the US Federal Trade Commission, are also stepping up scrutiny of AI and taking enforcement action to stop misuse and protect competition. Agency officials are hearing similar complaints that their efforts are stifling investment and innovation, but they say they’re not overstepping their authority.
“There is no AI exception to antitrust enforcement,” Mike Mikawa, a trial lawyer with the DOJ’s antitrust division, told a conference in September. “It seems pretty simple that we want people and competitors in the market to deal with each other fairly, and for no single entrenched player to potentially block parts of the market or foreclose or make chokeholds on essential inputs.”
— UK relies on regulators not rules —
For the UK, the future is perhaps the most opaque, at least on the legislative side. Regulating everything but AI could be a better plan.
The country is marooned between the US and the EU in a post-Brexit, post-Covid, post-energy-shock slump, grappling with a global identity crisis and a flatlining economy and productivity. That said, its AI sector is doing very well, which will come in handy for the new government’s overarching goal of using economic growth to cure all ills.
Global AI rankings often put the UK in third place after the US and China on measures such as research, talent and government policy, although it lags somewhat on infrastructure.
The center-left Labour administration that took office in July has abandoned the previous government’s promised investment in national computational power, shifting its focus to private-sector data centers and designating them as critical national infrastructure with government protections.
What it really needs is increased business investment in and uptake of AI to boost growth and productivity.
Regulation could bring some clarity to businesses developing or deploying AI tools, which in the meantime may be looking to the EU’s legal framework for guidance. Regulatory changes that tackle high energy prices, ease visa rules for skilled workers or relax planning rules for AI infrastructure could prove as effective as any dedicated legislation.
Aware of this, the UK is actively waiting to see what happens before regulating AI. The Labour party, which took over from the Conservatives in government in July, promised to bring forward legislation requiring safety testing of frontier models and putting the UK’s AI Safety Institute on a statutory footing. But officials have given no timeline or legislative detail.
There is a growing consensus that what needs reforming is the regulators themselves. They need more capacity and skills to challenge what companies are doing. The mechanism that brings existing regulators together — a model held in increasingly high regard around the world — itself needs more powers, and so does the national competition authority.
For now, the government is launching a new agency, the "Regulatory Innovation Office," to tell regulators how to remove barriers to tech adoption.
But Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI, told an event in London recently that there's a need for regulatory creativity, within the bounds of social acceptability, and a shift in watchdogs' relations with businesses. Regulators can take risks as much as companies do, Suleyman said.
“Regulators have to be more experimental, or more risk-taking, move faster and really form relationships, because a lot of these [enforcement] cases take a decade to pass through," he said. "And actually, by that time, the entire landscape has changed.”