Parents of teens who died by suicide after AI chatbot interactions testify to Congress

Parents whose teenagers killed themselves after interactions with artificial intelligence chatbots testified to Congress on Tuesday about the dangers of the technology.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.
“Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”
Raine’s family sued OpenAI and its CEO, Sam Altman, last month, alleging that ChatGPT coached the boy in planning to take his own life.
Also testifying Tuesday was Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida.
Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. Child advocacy groups criticized the announcement as not enough.
“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.
“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”
The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.
The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.
In the U.S., more than 70% of teens have used AI chatbots for companionship, and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.
Robbie Torney, the group’s director of AI programs, was also set to testify Tuesday, as was an expert with the American Psychological Association.
The association issued a health advisory in June on adolescents’ use of AI that urged technology companies to “prioritize features that prevent exploitation, manipulation, and the erosion of real-world relationships, including those with parents and caregivers.”
If you or someone you know needs help, you can reach the Suicide and Crisis Lifeline by calling or texting 988, or by chatting online at 988lifeline.org.