While Congress has yet to enact comprehensive standards for regulating AI, 45 US states have passed at least one law aimed at regulating the technology in some way, and more are coming. This year, more than 650 AI-related bills have been introduced by state lawmakers, with more than 30 signed into law, according to Multistate, a government affairs firm that tracks AI legislation.
At the same time, governors in California, Maryland, Massachusetts, Virginia and Washington have issued executive orders relating to AI. State agencies, most notably the California Privacy Protection Agency, or CPPA, the first standalone data-protection authority in the US, are also considering rules that could affect AI development.
Tech companies and their lobbyists, however, say states are using their legislative laboratories to create regulatory confusion and uncertainty that’s only going to stifle AI innovation. It’s the federal government’s job to regulate AI, not the states, they say.
State legislators agree that Congress should enact baseline AI rules, without preempting stronger state laws. But waiting for congressional action that doesn’t appear likely any time soon isn’t a viable option either, lawmakers say. That's why they're working across state lines hoping to write uniform rules and regulations that companies can follow.
Despite all their efforts, states are still years away from enacting baseline AI regulations that could be copied across the country in the absence of federal legislation. For now, they are taking incremental steps, with most of their efforts aimed at stopping the most pervasive misuse of AI technology: deepfakes.
When state lawmakers have laid out ambitious plans to regulate AI more broadly, the tech industry’s opposition has been concerted, swift and successful.
Lawmakers in California and Connecticut learned that lesson the hard way this year when their more comprehensive proposals to regulate AI risks died after facing opposition from their own governors, members of their own political parties.
— States lay groundwork —
US states have often outpaced Congress in addressing risks associated with new technologies. In the absence of a federal data breach law, all 50 states enacted their own. Since 2018, 20 states, led by California, have enacted consumer data privacy laws, while partisan divisions in Congress continue to delay passage of baseline federal privacy rules.
The same scenario is playing out with AI regulation. Over the last two years, US states began laying the groundwork to regulate the private sector’s use of AI. At least 16 states passed laws to set up some type of AI commission or study group using the Biden Administration’s Blueprint for an AI Bill of Rights as their guide.
With no federal law banning or regulating deepfakes, another top priority for states has been passing laws banning simulated images, audio recordings, or videos that have been altered or manipulated to misrepresent a real person.
More than 30 US states have enacted laws that address some type of deepfake, such as criminalizing deepfakes involving minors engaging in sexual conduct or deepfakes aimed at election interference.
Some states are trying to protect key industries. Tennessee's Ensuring Likeness, Voice and Image Security (Elvis) Act protects musicians from AI-generated audio that mimics their singing voices.
The trend continued this year. California Governor Gavin Newsom signed bills designed to protect individuals from misuse of their digital content. Under the legislation, large social media sites will have to label or remove election-related deepfakes, political advertising generated using AI must be disclosed, and unlabeled deepfakes of candidates will be banned in the 120 days before an election.
— The EU’s influence —
This year, several states went further by introducing bills aimed at increasing transparency when AI is used and mitigating potential bias and discrimination from the use of AI products, with varying degrees of success.
Much of the attention has been focused on California, home to some of the world’s biggest tech companies, and the tech industry’s most aggressive regulator. California passed the first data breach and comprehensive consumer data protection laws in the US. It established the CPPA as the first agency in the US to oversee data privacy.
State lawmakers turned their attention this year to AI. California is home to 35 of the top 50 AI companies in the world, according to Newsom, and legislators were eager to set some initial guardrails around the industry.
They introduced more than 50 bills to regulate aspects of AI, according to Multistate. Newsom said he has signed about a dozen of those AI bills into law this year.
He signed one bill that lays the groundwork for future AI rules by defining AI as "an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."
He also signed the California AI Transparency Act (SB 942), which requires companies to offer free AI detection tools so the public can tell the difference between AI-generated content and reality, and to watermark AI-generated content, and the Generative Artificial Intelligence Accountability Act (SB 896), which requires state agencies to assess the risk of using generative AI and to disclose to the public when it is used.
State legislators looking to pass more comprehensive AI rules have also turned to the EU for guidance, and officials there have been happy to help. A map of every state where AI legislation was introduced this year hangs in the EU’s San Francisco office.
There should be room for states to experiment and see what works, said Gerard de Graaf, the EU’s Senior Envoy for Digital to the US. But the goal should be uniformity and harmonization across states, de Graaf has warned, because a wave of different AI rules and regulations will lead to market fragmentation, “which is something the EU has suffered from tremendously.”
Emulating the EU’s comprehensive approach, however, isn’t the goal of most state legislatures. The same day that the European Parliament adopted the EU's AI Act — March 13 — Utah passed the first law in the US specific to generative AI.
But the Utah law, which took effect in May, is much narrower and only imposes liability on businesses for inadequate or improper disclosures of generative AI. It does create an Office of Artificial Intelligence Policy to administer a state AI program.
EU influence is more apparent in Colorado’s new AI law, passed a few weeks after Utah’s. Like the AI Act in Europe, Colorado’s AI law focuses on consumer protection and high-risk AI systems.
The EU's AI Act bans emotional recognition technology in schools and the workplace, prohibits social credit scores that reward or punish certain kinds of behavior, and prohibits predictive policing in certain instances. It also applies high-risk labels to AI in health care, hiring and issuing government benefits.
Similar provisions will apply in Colorado, starting in February 2026. Makers and deployers of high-risk AI systems in Colorado will also have to be far more transparent with the public about how their technology operates, how it’s used and whom it could hurt.
Colorado's new AI law also imposes now-familiar notice, documentation, disclosure and impact assessment requirements on developers and deployers of “high-risk” AI systems. Much like the AI Act in the EU, those are defined as any AI system that “makes, or is a substantial factor in making, a consequential decision,” such as decisions about housing, lending and employment. Makers and deployers will have to disclose the types of data used to train their AI.
Concerns about potentially negative impacts on the state's budding AI industry, particularly small startups, almost killed the bill. Colorado Governor Jared Polis, a Democrat, said he signed it with “reservations” and urged the state legislature to carefully consider amendments before the law takes effect in 2026.
The legislation's sponsor, Colorado State Sen. Robert Rodriguez, said he's already working on updates to the law to address some of those concerns, along with a 25-member task force that includes representatives from the tech industry, civil rights groups and academia.
"Our plans are to start digging into some policy definitions and parts of the policy that could use some tweaks or some attention," he said, such as fine-tuning reporting requirements to the state attorney general's office.
Similar fears about unintended consequences felled another risk-based AI bill earlier this year in Connecticut. Backers of Connecticut's SB 2 argued it was the first bill of its kind in the US to attempt to regulate AI on a comprehensive scale like the EU's AI Act.
Business groups and Republican legislators argued it would drive away investment and cripple new AI businesses, especially small startups. The bill died after Governor Ned Lamont, a Democrat, said he would veto it if it came to his desk.
Lawmakers whose efforts faltered this year are planning to introduce similar proposals next year, including Connecticut Sen. James Maroney, the sponsor of SB 2.
He and Rodriguez are heading up a bipartisan, multi-state AI task force that will continue coordinating proposed AI regulations across state lines. About 40 legislators from around the country typically participate in its regular meetings, and its mailing list includes almost 200 legislators from 47 states; they're still working on the last three, they said.
"Saying to wait for the federal government to act is saying you don't want legislation because they've been looking at comprehensive data privacy since 2001 and haven't passed anything," Maroney said.
The last comprehensive privacy legislation Congress passed was the Children's Online Privacy Protection Act of 1998, he said, "so I think it is up to the states to act."
— Setbacks in Sacramento —
Even state agencies are divided over rules that could impact AI development. In California, the CPPA is considering whether to formally advance proposed rules for automated decision-making technologies, or ADMT, that would require notice before the technology is used and allow consumers to opt out of its use. But board members say they're worried the proposed rules are too sweeping and could end up hurting many companies.
The tech industry and other business groups successfully worked to defeat broader risk-based AI proposals in California this year, too.
The demise of Connecticut’s AI bill did not deter California Senator Scott Wiener, a San Francisco Democrat who introduced perhaps this year's most controversial and hotly contested state proposal to regulate AI, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).
The bill aimed to address the potentially catastrophic threats posed by AI and would have required developers of advanced AI models to test them for their ability to enable attacks on digital and physical infrastructure or to help make chemical, biological, radiological and nuclear weapons. Developers also would have been required to include a “kill switch” that could deactivate their AI systems if they went rogue or caused harm to consumers. The bill also would have protected whistleblowers inside tech companies who reported such threats.
The bill passed both legislative chambers by wide margins. It had the backing of EU officials such as de Graaf, who said it would help align AI regulations in California and Europe.
But under the bill, AI developers would have faced steep fines if their products were later used to cause harm, such as loss of life or cyberattacks costing more than $500 million in damages. AI companies and even fellow Democrats in Congress pushed back hard, urging Newsom to veto the bill, arguing it would impose unreasonable liability on open-source AI developers that aren’t seeking to make a profit and can’t control what customers might do with their technology.
Wiener narrowed the bill and eliminated a proposed agency called the Frontier Model Division. He also made several changes specifically in response to concerns about open-source systems, such as clarifying that developers would only have to shut down risky AI models still in their possession. Developers also would not have been responsible if someone tweaked an open-source AI model and turned it into, effectively, a different model.
But Newsom vetoed the bill in late September, saying it would give the public a "false sense of security" because it focused only on the "most expensive and large-scale models" and could curtail "the very innovation that fuels advancement in favor of the public good."
Other California bills supported by the EU also failed to advance this year due to opposition from the tech industry, including AB 2930, which would have required developers and deployers of automated-decision tools that make consequential decisions about consumers to perform impact assessments. Consumers would have had the right to know when the tools were being used and to opt out.
Another bill backed by de Graaf that would have required AI makers to label AI-generated content, AB 3211, passed out of the Assembly, but died in the Senate.
Despite these defeats, state lawmakers aren't giving up on passing more comprehensive, risk-based AI legislation.
That means the tech industry and its lobbyists will keep pushing back in state capitals across the country and urging Congress to stop states from experimenting with a patchwork of overlapping regulations that they warn could bring the lucrative AI industry to a standstill.