Tech companies are worried they could be held liable for the spread of election-related deepfakes powered by artificial intelligence under proposed legislation in Connecticut and Maryland, and they’re pushing for changes. Nearly two dozen US states are considering legislation this year aimed at stopping election-related deepfakes as lawmakers say the technology is being used to deceive voters and erode trust in elections, whether it’s Elon Musk sharing deepfakes of former Vice President Kamala Harris or robocalls in New Hampshire impersonating former President Joe Biden.
Dozens of bipartisan bills have been introduced in at least 20 state legislatures, and the proposals share a common if modest goal: setting disclosure requirements for political communications manipulated using AI, with violators facing potential fines, lawsuits and even years in jail.
The bills also include exemptions for online platforms, citing Section 230 of the Communications Decency Act - the controversial US law that shields online platforms from liability for hosting third-party content - or the First Amendment. Some exclude online platforms if they clearly disclose AI-generated election content.
But new bills introduced in Connecticut and Maryland contain no such carveouts. As a result, lobbyists for the tech industry are urging the bills’ sponsors to clarify that AI companies and online platforms would not be held liable, warning that the online ecosystem could otherwise suffer.
- New California law blocked -
The wave of deepfake proposals across state legislatures continues a trend that began last year, when 15 states enacted laws requiring disclosures on election-related deepfakes, usually during the 90-day period preceding an election or primary. The penalties for noncompliance can be costly.
New laws in states such as Alabama and Arizona give those hurt by the deepfake the right to sue for damages. In states like Michigan, violators could face stiff fines and potentially criminal charges, including up to five years in prison for repeat offenders.
California passed three new election deepfake laws that have been among the most controversial. Two are already facing legal challenges in federal court from Elon Musk, political satirists, video-sharing platform Rumble and others (see here).
The litigation takes aim at California’s Defending Democracy from Deepfake Deception Act of 2024 (AB 2655), a first-of-its-kind law that requires large online platforms to remove election deepfakes within 72 hours of receiving a complaint. A court can order companies to comply if they don’t.
They’re also challenging California’s AB 2839, which bans people or groups from knowingly and maliciously sharing election-related deepfakes that might cause harm, and exposes violators to lawsuits.
Political satirist Christopher Kohls, maker of deepfake videos mocking presidential candidate Kamala Harris, argued AB 2839 violates his right to free speech because practically anyone who saw his AI-generated videos could sue him for damages.
California countered that the statute is constitutional because it contains a safe harbor provision for parody and satire. But US District Judge John A. Mendez in Sacramento, appointed during the George W. Bush administration, disagreed and said the law likely violates the First Amendment (see here).
“While California has a valid interest in protecting the integrity and reliability of the electoral process, AB 2839 is unconstitutional because it lacks the narrow tailoring and least restrictive alternative that a content based law requires under strict scrutiny,” Mendez said.
- Wave of new state bills -
Despite that ruling, states continue to propose bills this year that open the door to lawsuits. Many - including bills in Arkansas (HB 1041), Illinois (HB 1860), South Carolina (H 3517), Montana (SB 25) and Nevada (AB 73) - would impose stiff civil and even criminal penalties, including jail time for repeat offenders, for failing to disclose an election deepfake intended to cause harm.
As introduced, Virginia’s HB 2479 would have imposed a civil penalty of up to $25,000 on violators, who could be charged with a misdemeanor. The bill passed the state House on Feb. 4, but a Senate committee later slashed the potential fine to $50 per violation.
Nearly all the state proposals also include a carveout for online platforms or interactive computer services. Some simply cite Section 230, while other exemptions are contingent upon platforms’ ongoing disclosure of election deepfakes.
But Maryland lawmakers are taking a slightly different approach. Maryland’s HB 525 would prohibit anyone from knowingly and willfully using fraud to influence voters, and fraud would include using deepfakes without disclosures. Violators could be found guilty of a misdemeanor, fined up to $5,000, and/or imprisoned for up to five years.
- Tech pushing for carveouts -
Tech lobbying group TechNet praised Maryland’s effort. But the bill still needs a specific carveout for online platforms and services under Section 230, TechNet Executive Director Margaret Durkin testified at a hearing on Feb. 4.
“TechNet and its member companies want to ensure that bad actors are held liable and ultimately accountable, and not the platform or online service that may have been used to distribute such deceptive media,” she said.
Connecticut’s HB 6846, introduced by Democratic Rep. Matt Blumenthal, son of US Sen. Richard Blumenthal, would criminally penalize speakers who knowingly distribute unlabeled AI-generated images within 90 days of an election to influence voters. Under the bill, the state attorney, a candidate or anyone who was injured by an election deepfake could also sue for damages.
But the bill contains no specific exemptions for online platforms, and it is facing concerted opposition from multiple tech and business groups.
The bill’s provisions are overly broad and problematic, and most likely violate the First Amendment because the bill could censor and punish ordinary people’s protected speech, the Center for Democracy and Technology said in written testimony on Feb. 7.
“While federal and state laws commonly place disclaimer requirements on the speech of candidates, committees, and other regulated political entities, it is unusual and alarming to require the same disclaimers, under threat of criminal charges, of regular people participating in the political process, many of whom would likely be unaware that such disclosures are required,” the CDT said.
The proposal could also hurt the online economy, Paul Amarone, a public policy associate with the Connecticut Business and Industry Association, said in written testimony. The bill creates so much uncertainty about who would be responsible for complying with the law that “ad platforms may choose to sell less time to election advertisers altogether,” he said.
A carveout for online platforms is also needed in HB 6846 to make clear that liability is limited to the person who created and shared an election deepfake, not the tool or platform used to create and share it, TechNet Executive Director Christopher Gilrein testified.
“We further recommend a clear exemption for red-teaming or other uses of synthetic media for cybersecurity or fraud detection purposes,” he said.