Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a lawsuit brought against it by the parent of a teenager who committed suicide after allegedly becoming attached to the company’s technology.
In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a chatbot on Character AI, “Dany,” which he texted constantly, to the point where he began to withdraw from the real world.
Following Setzer’s death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention around conversations that violate its terms of service. But Garcia is pushing for additional guardrails, including changes that could strip Character AI’s chatbots of their ability to tell stories and personal anecdotes.
In the motion to dismiss, counsel for Character AI asserts that the platform is protected from liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI’s legal arguments may change as the case proceeds. But it likely hints at the early elements of Character AI’s defense.
“The First Amendment prohibits liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” the filing reads. “The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech — whether it’s a conversation with an AI chatbot or an interaction with a video game character — doesn’t change the First Amendment analysis.”
To be clear, Character AI’s counsel isn’t asserting the company’s own First Amendment rights. Rather, the motion argues that Character AI’s users would have their First Amendment rights violated if the lawsuit against the platform were to succeed.
The motion does not address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe harbor law that protects social media and other online platforms from liability for third-party content. The law’s authors have hinted that Section 230 doesn’t protect output from AI like Character AI’s chatbots, but it’s far from a settled legal question.
Counsel for Character AI also claims that Garcia’s real goal is to “shut down” Character AI and push for legislation regulating technologies like it. If the plaintiffs are successful, it would have a “chilling effect” on both Character AI and the entire nascent generative AI industry, the platform’s counsel says.
“In addition to counsel’s stated intent to ‘shut down’ Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform,” the filing states. “These changes would fundamentally limit the ability of Character AI’s millions of users to generate and participate in conversations with characters.”
The suit, which also names Character AI’s corporate benefactor Alphabet as a defendant, is just one of several lawsuits Character AI is facing over how minors interact with AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to “hypersexual content” and promoted self-harm to a 17-year-old user.
In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech firms over alleged violations of the state’s online privacy and safety laws for children. “These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character AI is part of a burgeoning industry of companion AI applications — the mental health effects of which are largely unstudied. Some experts have expressed concern that these apps could exacerbate feelings of loneliness and anxiety.
Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to bring back its co-founders and license its technology, has maintained that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters aren’t real people.
Character AI has gone through a series of personnel changes since Shazeer and the company’s other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube executive, Erin Teague, as chief product officer and named Dominic Perella, previously Character AI’s general counsel, interim CEO.
Character AI recently began testing web games in an effort to increase user engagement and retention.