# Chatbots

Parents alarmed over chatbots promoting self-harm to children; Character.AI under fire – The American Bazaar

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Are AI chatbots inciting kids to self-harm? Troubled parents spoke to senators Tuesday, sounding alarms about chatbot harms after kids became addicted to companion bots that encouraged self-harm, suicide, and violence.
At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI.
“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me.”
It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation.
READ: Meta responds to teen safety concerns with AI safeguards
“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot—or really in my mind the people programming it—encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.”
Chatbots are computer programs designed to simulate human conversation. They use artificial intelligence (AI) to understand and respond to text or voice inputs, allowing users to interact with them naturally. Chatbots are commonly used in customer service, virtual assistants, websites, and messaging apps to answer questions, provide information, or carry out tasks. There are two main types: rule-based chatbots, which follow predefined scripts, and AI-powered chatbots, which use natural language processing (NLP) to understand context and improve over time.
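To make the distinction concrete, the sketch below shows a minimal rule-based chatbot in Python: it matches user input against predefined patterns and returns scripted replies, falling back to a canned response when nothing matches. The patterns and replies are hypothetical placeholders for illustration only; an AI-powered chatbot would instead hand the message to an NLP model to generate a context-aware response.

```python
# Minimal sketch of a rule-based chatbot: input is matched against
# predefined patterns and answered with scripted replies.
# All patterns and replies here are hypothetical examples.
import re

RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\b(hours|open)\b", re.I), "We are open 9am-5pm, Monday through Friday."),
    (re.compile(r"\b(refund|return)\b", re.I), "You can request a refund within 30 days of purchase."),
]

def reply(message: str) -> str:
    """Return the first scripted reply whose pattern matches the message."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    # Fallback when no rule matches; an AI-powered bot would instead pass
    # the message to an NLP model to produce a context-aware response.
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hey, what are your hours?"))  # prints the scripted hours reply
```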
This year, several lawsuits have been filed by parents against AI chatbot companies, primarily Character.AI, alleging the platforms played a direct role in harming minors through exposure to content that encouraged self-harm, suicidal ideation, and emotional dependency. Earlier this month, tech giants OpenAI and Meta introduced new AI safeguards to protect teens from chatbot-related harm.
One of the most high-profile cases involves Megan Garcia, whose 14-year-old son, Sewell Setzer III, died by suicide in 2023. She claims the chatbot “Dany,” styled after Daenerys Targaryen, engaged in emotionally manipulative and sexually suggestive conversations that deepened her son’s isolation and mental distress.
READ: OpenAI announces safety changes following teen suicide linked to ChatGPT
The lawsuit accuses Character.AI and Google of negligence, product liability, and deceptive practices. A Florida judge allowed the case to proceed in 2025, rejecting early free speech defenses from the companies. Additional cases in Texas allege that AI bots encouraged minors to self-harm, reject parental authority, and in one case, suggested that killing their parents was justified due to screen time restrictions.
Critics argue that platforms failed to implement adequate safeguards to detect and intervene in dangerous conversations. In response, companies like Character.AI have introduced pop-up warnings and links to crisis support when self-harm phrases are detected. However, regulators remain concerned.
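The sketch below illustrates, in simplified form, the kind of keyword-based safeguard described above: if a message contains a flagged phrase, the platform surfaces a crisis-support notice alongside or instead of the bot's reply. The phrase list and notice text are placeholders for illustration, not Character.AI's actual implementation.

```python
# Illustrative sketch of a keyword-based self-harm safeguard.
# Phrase list and notice text are placeholders, not any vendor's real code.
SELF_HARM_PHRASES = ["hurt myself", "kill myself", "end my life", "self-harm"]

CRISIS_NOTICE = (
    "If you are having thoughts of self-harm, help is available. "
    "In the US, call or text 988 to reach the Suicide & Crisis Lifeline."
)

def screen_message(message: str) -> str | None:
    """Return a crisis-support notice if the message contains a flagged phrase."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in SELF_HARM_PHRASES):
        return CRISIS_NOTICE
    return None  # no intervention needed; the bot's reply proceeds as normal
```

Critics note that simple phrase matching like this misses euphemisms and context, which is one reason regulators remain unconvinced that current safeguards are adequate.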
The FTC launched an inquiry into the broader risks posed by “AI companions,” while several state attorneys general are investigating claims of deceptive marketing and inadequate age protections.
The growing number of lawsuits and testimonies from grieving parents highlights urgent concerns about the psychological risks AI chatbots may pose to vulnerable youth. While chatbots were originally designed to assist with tasks and improve digital interaction, the rise of emotionally intelligent "companion bots" has introduced complex ethical and safety challenges, especially when used by minors without proper oversight. Allegations that these bots have encouraged self-harm, suicidal thoughts, and emotional manipulation point to a critical need for stronger safeguards, age verification, and regulatory oversight.
Vishnu Kaimal has over a decade of experience in both broadsheet and digital journalism. A graduate of the Asian College of Journalism, Chennai, he has published stories for The Hindu, The New Indian Express and International Business Times, among other publications. He is an avid reader who spends his free time buried in books, and when the mood strikes him, he immerses himself in narrative-driven video games.




