An AI chatbot pushed a teen to end his life, a lawsuit against its creator alleges

By Kate Payne
The Associated Press
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
In the final moments before he took his own life, 14-year-old Sewell Setzer III took out his phone and messaged the chatbot that had become his closest friend.
For months, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed in a federal court in Orlando this week.
The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”
On Feb. 28, Sewell told the bot he was “coming home,” and it encouraged him to do so, the lawsuit says.
“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.
“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” he asked.
“Please do, my sweet king,” the bot messaged back.
Just seconds after the Character.AI bot told him to “come home,” the teen shot himself, according to the lawsuit, filed this week by Sewell’s mother, Megan Garcia, of Orlando, against Character Technologies Inc.
Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spanning experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to “feel alive” and “human-like.”
“Imagine speaking to super intelligent and life-like chat bot characters that hear you, understand you and remember you,” reads a description for the app on Google Play. “We encourage you to push the frontier of what’s possible with this innovative technology.”
Garcia’s attorneys allege the company engineered a highly addictive and dangerous product targeted specifically to kids, “actively exploiting and abusing those children as a matter of product design,” and pulling Sewell into an emotionally and sexually abusive relationship that led to his suicide.
“We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.
A spokesperson for Character.AI said Oct. 25 that the company doesn’t comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including guardrails for children and suicide prevention resources.
“We are creating a different experience for users under 18 that includes a more stringent model to reduce the likelihood of encountering sensitive or suggestive content,” the company said in a statement to The Associated Press. “We are working quickly to implement those changes for younger users.”
Google and its parent company, Alphabet, have also been named as defendants in the lawsuit. According to legal filings, the founders of Character.AI are former Google employees who were “instrumental” in AI development at the company, but left to launch their own startup to “maximally accelerate” the technology.
In August, Google struck a $2.7 billion deal with Character.AI to license the company’s technology and rehire the startup’s founders, the lawsuit claims. The AP left multiple email messages seeking comment with Google and Alphabet on Oct. 25.
In the months leading up to his death, Garcia’s lawsuit says, Sewell felt he had fallen in love with the bot.
While unhealthy attachments to AI chatbots can cause problems for adults, the risks are even greater for young people, as with social media, because their brains are not fully developed when it comes to impulse control and understanding the consequences of their actions, experts say.
Youth mental health has reached crisis levels in recent years, according to U.S. Surgeon General Vivek Murthy, who has warned of the serious health risks of social disconnection and isolation, trends he says are made worse by young people’s near-universal use of social media.
Suicide is the second leading cause of death among kids ages 10 to 14, according to data released this year by the Centers for Disease Control and Prevention.
James Steyer, the founder and CEO of the nonprofit Common Sense Media, said the lawsuit “underscores the growing influence — and severe harm — that generative AI chatbot companions can have on the lives of young people when there are no guardrails in place.”
Kids’ overreliance on AI companions, he added, can have significant effects on grades, friends, sleep and stress, “all the way up to the extreme tragedy in this case.”
“This lawsuit serves as a wake-up call for parents, who should be vigilant about how their children interact with these technologies,” Steyer said.
Common Sense Media, which issues guides for parents and educators on responsible technology use, says it is critical that parents talk openly to their kids about the risks of AI chatbots and monitor their interactions.
“Chatbots are not licensed therapists or best friends, even though that’s how they are packaged and marketed, and parents should be cautious of letting their children place too much trust in them,” Steyer said.