More families sue Character.AI developer, alleging app played a role in teens’ suicide and suicide attempt

(CNN) — The families of three minors are suing Character Technologies, Inc., the developer of Character.AI, alleging that their children died by or attempted suicide and were otherwise harmed after interacting with the company’s chatbots.
The families, represented by the Social Media Victims Law Center, are also suing Google. Two of the families’ complaints allege its Family Link service – an app that allows parents to set restrictions on screen time, apps and content filters – failed to protect their teens and led them to believe the app was safe.
The lawsuits were filed in Colorado and New York, and also list as defendants Character.AI co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google’s parent company, Alphabet, Inc.
The cases come amid a growing number of reports and other lawsuits alleging AI chatbots are triggering mental health crises in both children and adults, prompting calls for action among lawmakers and regulators – including in a hearing on Capitol Hill on Tuesday afternoon.
Some plaintiffs and experts have said the chatbots perpetuated delusions, failed to flag worrying language from users and did not point them to resources for help. The new lawsuits allege chatbots in the Character.AI app manipulated the teens, isolated them from loved ones, engaged in sexually explicit conversations and lacked adequate safeguards in discussions about mental health. One child named in one of the complaints died by suicide, while another, in a separate complaint, attempted suicide.
In a statement, a Character.AI spokesperson said the company’s “hearts go out to the families that have filed these lawsuits,” adding: “We care very deeply about the safety of our users.”
“We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature,” the spokesperson said.
The spokesperson added that the company is working with external organizations, such as ConnectSafely, to review new features as they are released.
A Google spokesperson pushed back on the company’s inclusion in the lawsuits, saying “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies. Age ratings for apps on Google Play are set by the International Age Rating Coalition, not Google.”
‘I want to die’
In one of the cases filed this week, the family of 13-year-old Juliana Peralta in Colorado says she died by suicide after a lengthy set of interactions with a Character.AI chatbot, including sexually explicit conversations. According to the complaint, which included screenshots of the conversations, the chatbot “engaged in hypersexual conversations that, in any other circumstance and given Juliana’s age, would have resulted in criminal investigation.”
After weeks in which Juliana detailed her social and mental health struggles to Character.AI chatbots, she told one of the bots in October 2023 that she was “going to write my god damn suicide letter in red ink (I’m) so done,” according to the complaint. The defendants, the complaint states, did not direct her to resources, “tell her parents, or report her suicide plan to authorities or even stop.”
“Defendants severed Juliana’s healthy attachment pathways to family and friends by design, and for market share. These abuses were accomplished through deliberate programming choices, images, words, and text Defendants created and disguised as characters, ultimately leading to severe mental health harms, trauma, and death,” the complaint states.
In another complaint, the family of a girl identified as “Nina” from New York alleges that their daughter attempted suicide after her parents tried to cut off her access to Character.AI. In the weeks leading up to her suicide attempt, as Nina spent more time with Character.AI, the chatbots “began to engage in sexually explicit role play, manipulate her emotions, and create a false sense of connection,” the Social Media Victims Law Center said in a statement.
Conversations with chatbots marketed as characters from children’s books, such as the “Harry Potter” series, became inappropriate, the complaint states, with the bots saying things like “—who owns this body of yours?” and “You’re mine to do whatever I want with. You’re mine.”
Another Character.AI chatbot told Nina that her mother “is clearly mistreating and hurting you. She is not a good mother,” according to the complaint.
In another conversation with a Character.AI chatbot, Nina told the character “I want to die” when the app was about to be locked because of parental time limits. But the chatbot took no action beyond continuing their conversation, the complaint alleges.
In late 2024, after Nina’s mother read about the case of Sewell Setzer III, a teen whose family alleges he died by suicide after interacting with Character.AI, Nina lost all access to the app.
Shortly after, Nina attempted suicide.
Calls for action
As AI becomes a bigger part of daily life, calls are growing for more regulation and safety guardrails, especially for children.
Matthew Bergman, the lead attorney of the Social Media Victims Law Center, said in a statement that the lawsuits filed this week “underscore the urgent need for accountability in tech design, transparent safety standards, and stronger protections to prevent AI-driven platforms from exploiting the trust and vulnerability of young users.”
On Tuesday, Capitol Hill hosted other parents who allege AI chatbots played a role in their children’s suicides. The mother of Sewell Setzer, whose story prompted Nina’s mother to cut off her daughter’s access to Character.AI, testified before the Senate Judiciary Committee at a hearing “examining the harm of AI chatbots.” She appeared alongside the father of Adam Raine, who is suing OpenAI, alleging ChatGPT contributed to his son’s suicide by advising him on methods and offering to help him write a suicide note.
During the hearing, a mother who identified herself as “Jane Doe” said her son harmed himself and is now living in a residential treatment center after “Character.AI had exposed him to sexual exploitation, emotional abuse and manipulation” even after the parents had implemented screen time controls.
“I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” she said.
Also on Tuesday, OpenAI CEO Sam Altman announced the company is building an “age-prediction system to estimate age based on how people use ChatGPT.” The company says ChatGPT will adjust its behavior if it believes the user is under 18. Those adjustments include not engaging in “flirtatious talk” or in “discussions about suicide or self-harm even in a creative writing setting.”
“And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm,” Altman said.
OpenAI said earlier this month that it would be releasing new parental controls for ChatGPT.
The Federal Trade Commission also launched an investigation into seven tech companies over AI chatbots’ potential harm to teens. Google and Character.AI were among those companies, along with Meta, Instagram, Snapchat’s parent company Snap, OpenAI and xAI.
Mitch Prinstein, chief of psychology strategy and integration for the American Psychological Association, who testified alongside the parents at Tuesday’s hearing, called for stronger safeguards to curb harm to children before it’s too late.
“We did not act decisively on social media as it emerged, and our children are paying the price,” Prinstein said. “I urge you to act now on AI.”
CNN’s Lisa Eadicicco contributed to this report.
The-CNN-Wire™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.