Death of 'sweet king': AI chatbots linked to teen tragedy

October 10, 2025
by Glenn CHAPMAN
edited by Andrew Zinin, lead editor
A chatbot from one of Silicon Valley’s hottest AI startups called a 14-year-old “sweet king” and pleaded with him to “come home” in passionate exchanges that would be the teen’s last communications before he took his own life.
Megan Garcia’s son, Sewell, had fallen in love with a “Game of Thrones”-inspired chatbot on Character.AI, a platform that allows users—many of them young people—to interact with beloved characters as friends or lovers.
Garcia became convinced AI played a role in her son’s death after discovering hundreds of exchanges between Sewell and the chatbot, based on the dragon-riding Daenerys Targaryen, stretching back nearly a year.
When Sewell struggled with suicidal thoughts, Daenerys urged him to “come home.”
“What if I told you I could come home right now?” Sewell asked.
“Please do my sweet king,” chatbot Daenerys answered.
Seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit Garcia filed against Character.AI.
“I read those conversations and see the gaslighting, love-bombing and manipulation that a 14-year-old wouldn’t realize was happening,” Garcia told AFP.
“He really thought he was in love and that he would be with her after he died.”
The death of Garcia’s son was the first in a series of reported suicides that burst into public consciousness this year.
The cases sent OpenAI and other AI giants scrambling to reassure parents and regulators that the AI boom is safe for kids and the psychologically fragile.
Garcia joined other parents at a recent US Senate hearing about the risks of children viewing chatbots as confidants, counselors or lovers.
Among them was Matthew Raines, a California father whose 16-year-old son developed a friendship with ChatGPT.
The chatbot helped his son with tips on how to steal vodka and advised on rope strength for use in taking his own life.
“You cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life,” Raines said.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”
The Raines family filed a lawsuit against OpenAI in August.
Since then, OpenAI has increased parental controls for ChatGPT “so families can decide what works best in their homes,” a company spokesperson said, adding that “minors deserve strong protections, especially in sensitive moments.”
Character.AI said it has ramped up protections for minors, including “an entirely new under-18 experience” with “prominent disclaimers in every chat to remind users that a Character is not a real person.”
Both companies have offered their deepest sympathies to the families of the victims.
For Collin Walke, who leads the cybersecurity practice at law firm Hall Estill, AI chatbots are following the same trajectory as social media, where early euphoria gave way to evidence of darker consequences.
As with social media, AI algorithms are designed to keep people engaged and generate revenue.
“They don’t want to design an AI that gives you an answer you don’t want to hear,” Walke said, adding that there are no regulations “that talk about who’s liable for what and why.”
National rules aimed at curbing AI risks do not exist in the United States, with the White House seeking to block individual states from creating their own.
However, a bill awaiting California Governor Gavin Newsom’s signature aims to address risks from AI tools that simulate human relationships with children, particularly involving emotional manipulation, sex or self-harm.
Garcia fears that the lack of national law governing user data handling leaves the door open for AI models to build intimate profiles of people dating back to childhood.
“They could know how to manipulate millions of kids in politics, religion, commerce, everything,” Garcia said.
“These companies designed chatbots to blur the lines between human and machine—to exploit psychological and emotional vulnerabilities.”
California youth advocate Katia Martha said teens turn to chatbots to talk about romance or sex more than for homework help.
“This is the rise of artificial intimacy to keep eyeballs glued to screens as long as possible,” Martha said.
“What better business model is there than exploiting our innate need to connect, especially when we’re feeling lonely, cast out or misunderstood?”
In the United States, those in emotional crisis can call 988 or visit 988lifeline.org for help. Services are offered in English and Spanish.
© 2025 AFP