A wrongful death lawsuit against Character Technologies over a Florida teen’s suicide is one of the first to closely examine whether conversations generated from artificial intelligence-powered chatbots are protected speech, and the outcome could have profound implications for the AI industry.
US District Judge Anne Conway in Florida allowed much of the case to move forward last week and rejected “for now” the company’s arguments that conversations with Character.AI, an app that allows users to interact with various chatbots, are protected by the First Amendment.
The lawsuit was filed last fall by the mother of Sewell Setzer III, a Florida teen who allegedly died of a self-inflicted gunshot wound after he interacted with a Character.AI chatbot based on a character from the television show “Game of Thrones.”
Conway also rejected efforts by co-defendants Google and Character.AI co-founders Noam Shazeer and Daniel De Freitas to escape the lawsuit, potentially opening the door to future product-liability cases involving AI.
But the question of whether chatbot outputs are protected speech hasn't been answered yet. Conway, who was appointed to the bench during the George H.W. Bush administration, instructed Character.AI to prove through evidence-gathering that a string of words from a chatbot is protected by the First Amendment, saying she’s not prepared to decide “at this stage.”
The defendants failed to articulate “why words strung together” by an AI system should be considered speech, Conway said, noting that the court must first determine whether generative AI output constitutes expressive “speech” at all.
Now Menlo Park, California-based Character.AI will begin gathering evidence as the case moves toward summary judgment, hoping to change Conway’s mind and show that an AI chatbot’s output should be considered speech.
The plaintiffs will want to begin questioning executives from Character.AI and co-defendant Google about what they knew about the chatbot’s potential risks and when.
Chatbots may produce words “but they don't have the kind of expressive intent behind them that words usually have when they're put together in a book or a song or spoken or sung,” plaintiffs’ attorney Meetali Jain told MLex.
Both sides’ forthcoming motions for summary judgment will likely address Character.AI’s argument that interactions with chatbots are just like interactions with characters in video games and on social media sites, both of which have received First Amendment protection.
Conway’s order called for a deeper analysis of Character.AI’s “analogies,” including whether users could have a First Amendment interest in receiving chatbot outputs and whether chatbots communicate ideas that could be considered speech.
Conway said her decision “does not turn on whether Character AI is similar to other mediums that have received First Amendment protections; rather, the decision turns on how Character AI is similar to the other mediums.” In other words: Is Character AI comparable to video games and social media because it also communicates ideas that are protected speech?
There are also exceptions to First Amendment protection for certain kinds of speech that cause harm, such as incitement to violence, true threats and defamation.
Legal experts are divided about Conway’s initial First Amendment analysis, with some praising her findings. It’s “absurd” that a Silicon Valley company would argue it isn’t responsible for harmful content its chatbots produce, said David Evan Harris, an expert on AI ethics who teaches at the University of California, Berkeley’s business school.
“The First Amendment is designed to protect the freedom of expression of people to say what they want,” Harris said. “It is not designed to protect carelessly designed chatbots from encouraging children to commit suicide.”
But others are skeptical and argue Conway’s ruling could lead to a wave of litigation that could stifle the AI industry.
“If plaintiffs’ lawyers can hold GenAI model makers liable for harms people experienced, then the plaintiffs’ lawyers win and GenAI is done,” said Eric Goldman, a law professor at Santa Clara University School of Law.
“It’s just purely an existential question,” Goldman said. “Can they exist? There’s no way to build a GenAI model that doesn’t have the capability to commit harm.”
An order in a lawsuit against Roblox in Illinois federal court over video game addiction could prove instructive during summary judgment, even though Conway didn’t address it in her order last week, Goldman said.
In April, US District Judge April M. Perry rejected claims that Roblox addicted users, holding on First Amendment grounds that video games are recognized forms of protected expression. The parties have since settled the claims, according to a May 14 court filing.
“Plaintiffs label Roblox ‘addictive,’ but this just seems like another way of saying that Roblox’s interactive features make it engaging and effective at drawing players into its world, and First Amendment protections do not disappear simply because expression is impactful,” Perry said. “To the contrary, that is when First Amendment protection should be at its zenith.”