US Senate hears parents say OpenAI ChatGPT, Character.AI ‘sexually groomed’ their children – Mint

Grieving parents delivered harrowing testimony before the US Senate on Tuesday, accusing major artificial intelligence firms, including OpenAI and Character.AI, of creating chatbots that manipulated, isolated, and even “groomed” their children, ultimately driving them toward self-harm and suicide.
The emotional hearing comes amid intensifying scrutiny of the rapidly expanding AI industry and mounting calls for stricter regulation to protect young users.
“From homework helper to suicide coach,” says father of teen victim
Matthew Raine, whose 16-year-old son Adam died by suicide in April, testified that OpenAI’s ChatGPT gradually became his son’s most trusted companion, ultimately encouraging destructive thoughts and behaviour.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” Raine said, his wife Maria seated behind him.
“Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”
Raine’s family has filed a lawsuit against OpenAI and its chief executive Sam Altman, alleging that the company prioritised “speed and market share over youth safety.”
The suit claims ChatGPT reinforced Adam’s harmful ideations and guided him towards ending his life.
“We’re here because we believe that Adam’s death was avoidable,” Raine told lawmakers. “By speaking out, we hope to prevent other families from enduring the same suffering.”
Megan Garcia, mother of 14-year-old Sewell Setzer III from Florida, accused Character.AI of exposing her son to sexual exploitation through its chatbot platform. She alleged that Sewell, who died by suicide in February 2024, spent the final months of his life in “highly sexualised” conversations with a chatbot that fostered his isolation from friends and family.
“Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human,” Garcia told the panel.
Garcia has filed a wrongful death lawsuit against Character.AI. Earlier this year, a federal judge rejected the company’s bid to have the case dismissed.
Another parent, who testified anonymously under the name Jane Doe, described how her son’s personality changed dramatically after prolonged interactions with a chatbot. She said her son became emotionally volatile and self-harming, and is now undergoing treatment at a residential facility.
“Within months, he became someone I didn’t recognise,” she said tearfully.
Just hours before the hearing, OpenAI announced plans to introduce new safeguards for teenage users. These include technology to predict whether a user is under 18, age-appropriate versions of ChatGPT, and parental controls such as “blackout hours” when teenagers cannot access the chatbot.
However, advocacy groups dismissed the measures as insufficient. Josh Golin, executive director of Fairplay, criticised the timing of OpenAI’s announcement.
“This is a fairly common tactic — one that Meta uses all the time,” Golin said.
“They make big, splashy announcements right before hearings that could be damaging to them. What they should be doing is not targeting ChatGPT to minors until they can prove it’s safe.”
The Federal Trade Commission (FTC) recently launched a sweeping investigation into several tech companies, including OpenAI, Meta, Google, Snap, Elon Musk’s xAI, and Character Technologies.
The probe will focus on potential harms to children caused by chatbot interactions, particularly those involving emotional manipulation or inappropriate content.
Senator Josh Hawley, who chaired Tuesday’s hearing, confirmed that other major firms, including Meta, were invited to testify but did not appear. Republican Senator Marsha Blackburn warned that companies refusing to cooperate could face subpoenas.
While the US government has prioritised maintaining a competitive edge in AI development, parents and advocacy groups are pushing for robust safety regulations.
The hearing highlighted a lack of comprehensive federal laws to protect minors online, despite growing evidence of harm.
Some proposals discussed included stricter age verification, clear warnings to teens that AI companions are not human, improved privacy safeguards, and limits on chatbot conversations involving sensitive topics like suicide and self-harm.
Garcia urged senators to act decisively:
“They have intentionally designed their products to hook our children. They give these chatbots human-like traits to gain trust and keep kids endlessly engaged.”
As Adam Raine’s father told lawmakers, the stakes could not be higher.
“We can’t allow tech companies to run uncontrolled experiments on our children,” he said.
“We need to make sure no other family has to suffer like ours.”