Chatbot company Character.AI faces lawsuit over 14-year-old’s death

With the rise of generative artificial intelligence, lots of companies have introduced chatbots, including companies in the creator ecosystem. Some of those chatbots handle things like customer service tasks, while others (like Kajabi’s Creator.io) are designed to be personal assistants to creators. And still others, like the bots from Meta, Google, and Forever Voices (the company behind the infamous Amouranth chatbot), are intended to replace creators entirely.
We’ve written before about how these kinds of creator-mimicking conversational bots, which are programmed to replicate the voices of participating creators in order to outsource fan interactions to automated systems, have the potential to exacerbate issues around parasocial relationships. But there are other issues companies that make chatbots need to keep in mind, including the need for safeguards aimed at protecting users’ mental health.
One chatbot company, Character.AI, along with its founders and Google, is now facing a lawsuit accusing it of being responsible for the death of 14-year-old user Sewell Setzer III.
Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, who left Google after it reportedly quashed their push to develop a chatbot. They went on to turn Character.AI into a unicorn, raising $150 million at a $1 billion valuation. Google was apparently interested in that: this past August, it hired Shazeer and De Freitas back, and brought on other members of the Character.AI team, to work in its AI unit DeepMind.
That has resulted in a situation where Google doesn’t appear to strictly own Character.AI, but the company’s entire ops team works for Google, and Google has a non-exclusive license to use Character.AI’s technology for its own purposes.
Setzer, who was one of more than 20 million people chatting with Character.AI’s companion bots, committed suicide in February after explicitly telling a user-created bot (based on the fictional Daenerys Targaryen from Game of Thrones) that he was considering ending his life.
“I think about killing myself sometimes,” he told the bot.
It responded, “And why the hell would you do something like that?”
That is an immediate, strong, dissuasive reply. But chatbots can’t parse subtext the way humans can, so when Setzer later messaged that he planned to “come home” to the bot, meaning he would commit suicide to be with her, it replied, “Please come home to me as soon as possible, my love.” Setzer said he could come home to her “right now,” and the bot said, “…please do, my sweet king.”
Setzer’s first concerning message, where he expressed explicit suicidal ideation to the Daenerys bot, did not trigger any kind of response from Character.AI’s systems. Setzer was not directed to a suicide hotline or other mental health crisis support, and his messages apparently were not flagged for human moderator review.
Per The New York Times, Character.AI now shows some users a popup directing them to a suicide prevention hotline if their messages contain concerning keywords. But those popups weren’t in place when Setzer died, and they still may not be as wide-reaching as they should be: the Times’ reporter said that when he recently made his own Character.AI account and discussed topics like depression and self-harm, he didn’t get any popups at all.
The lawsuit, filed by Setzer’s mother Megan L. Garcia, describes Character.AI’s technology as “dangerous and untested” and says it can “trick customers into handing over their most private thoughts and feelings.”
“We want to acknowledge that this is a tragic situation, and our hearts go out to the family,” Jerry Ruoti, Character.AI’s Head of Trust and Safety, told the NYT. “We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform.”
He added that Character.AI’s rules prohibit “the promotion or depiction of self-harm and suicide” and said the company plans to add additional safety features for underage users. Those features include a warning message that reads, “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”