From privacy to antitrust to civil rights, artificial intelligence products are subject to a long list of California state laws, including several that took effect Jan. 1, California’s attorney general warned in issuing legal advisories that MLex has learned are a likely prelude to enforcement actions.

California Attorney General Rob Bonta in recent weeks issued two AI advisories, one aimed at the healthcare industry and one for the full economy, detailing consumer protection, false advertising, civil rights, competition and data privacy laws that the state plans to apply to AI. The advisories cover both a dozen newly enacted California AI laws and statutes more than half a century old, such as the California Invasion of Privacy Act and the Cartwright Act. A key theme is transparency, with the attorney general warning that companies are responsible for making sure consumers understand how their information is used to develop and train AI systems.
The attorney general also warned developers and entities using AI that they're responsible for regular auditing and testing “to ensure that their use is safe, ethical, and lawful, and reduces, rather than replicates or exaggerates, human error and biases.”
The AI advisories are significant because they aren't just abstract guidelines but indicate areas of focus where the California Department of Justice has active investigations and where enforcement actions could come, MLex has learned. The exact focus of those probes couldn't be determined.
“AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI,” Bonta said in a statement about the release of the legal advisories (see here). “Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.”
The California DOJ wouldn't be the first US regulator to bring AI enforcement actions, but the advisories signal it could emerge as an important AI regulator: California is the most populous US state and home to large AI companies such as OpenAI and Anthropic. The US Federal Trade Commission last year launched multiple enforcement actions involving sensitive uses of AI by private companies, from drug store chain Rite Aid’s deployment of an allegedly flawed facial recognition system to false claims about the capabilities of AI systems to detect weapons (see here and here).
The California attorney general’s regulatory remit is even broader than the FTC’s, however, because it includes nonprofits and government agencies that are outside the jurisdiction of the federal enforcer. And under the administration of President Donald Trump, it isn't yet clear whether the FTC will be as aggressive on AI enforcement as it was during the just-concluded Biden administration.
— New and old AI laws —
Companies that use AI must be aware of a dozen new California AI laws that took effect at the start of 2025, the attorney general warned.
Those new laws focus on disclosure requirements for businesses, the unauthorized use of a person’s likeness by AI systems, how AI is used in election and campaign materials, and prohibitions and reporting requirements for exploitative uses of AI. One example is AB 2013, which, starting Jan. 1, 2026, will require AI developers to disclose information on their websites about their use of AI training data, “including a high-level summary of the datasets used in the development of the AI system or service.”
Another new law called out by the attorney general will, starting next year, require developers to include “visible markings” to identify AI-generated content and “to make free and accessible tools” that will allow consumers “to detect whether specified content was generated by generative AI systems.”
Some of the laws the Office of the Attorney General plans to apply to AI systems are many decades old, however. Among them are California’s state antitrust law, the Cartwright Act; the state’s Unfair Competition Law; and the California Invasion of Privacy Act, a statute passed in 1967.
The state’s Unfair Competition Law might be particularly applicable, the attorney general said, to the use of AI to “foster deception” such as by impersonating a person through deepfakes, chatbots, or voice clones that could “create and knowingly use another person’s name, voice, signature, photograph, or likeness without that person’s prior consent.”
Privacy laws that the attorney general said will be applied to AI are newer, such as the California Consumer Privacy Act (CCPA), which gives Californians the right to opt out of the sale or sharing of their personal information and the right to limit the use and disclosure of their sensitive personal information. The Cartwright Act, passed in 1907, prohibits anticompetitive trusts, and the attorney general warned dominant AI companies that they can violate the law’s prohibitions even inadvertently.
“AI developers and users should be aware of any risks to fair competition created by AI systems, such as those that set pricing,” the attorney general said. “Even inadvertent harm to competition resulting from AI systems may violate one or more of California’s competition laws.”
The state will also monitor compliance with California’s Unruh Civil Rights Act, Bonta said, suggesting the federal shift under Trump away from a regulatory focus on diversity and equality won’t be mirrored in California.
“We have seen AI systems incorporate societal and other biases into their decision-making,” Bonta said, referring to the attorney general’s 2022 investigation into racial and ethnic bias in healthcare algorithms. “Developers and users of AI should be wary of these potential biases that may be unlawfully impacting Californians.”
— Healthcare —
Under both federal and California law, certain hospitals, healthcare providers and health insurers must protect patients’ civil rights when AI tools are used for patient care, federal and California officials said in recent guidance (see here). Transparency is a key element of both the federal and state guidance.
The California attorney general’s healthcare advisory is aimed at healthcare providers, insurers, vendors, investors, and other healthcare entities “that develop, sell, and use AI and other automated decision systems.” It warns those groups that they should “be transparent with patients about whether patient information is being used to train AI and how providers are using AI to make decisions affecting health and healthcare.”
In healthcare, providers of AI will be monitored for compliance with both new and older laws, Bonta warned. Healthcare entities using AI must comply with state laws such as the California Confidentiality of Medical Information Act, as well as the main federal health data privacy law, the Health Insurance Portability and Accountability Act.
One new law Bonta called out is SB 1120, which requires insurers to have licensed physicians “supervise the use of AI tools that make decisions about healthcare services and insurance claims.”
AI can benefit individual patients and public health, and can improve appropriate information sharing. But it also “risks causing discrimination, denials of needed care and other misallocations of healthcare resources, and interference with patient autonomy and privacy,” the attorney general warned.