AI Safety Expert Warns Parents to Watch Kids in Wake of Chatbot Ban – KQED


A leading artificial intelligence researcher is warning that Character.AI’s plan to ban chatbots for kids by late November may leave them susceptible to self-harm or suicide if they detach from an AI companion too quickly.
Jodi Halpern, a UC Berkeley bioethics professor, celebrated the ban overall, but wants parents to be on the lookout for emotional changes or needs in the weeks following children’s separation from their chatbots.
“Parents do not realize that their kids love these bots and that they might feel like their best friend just died or their boyfriend just died,” Halpern said. “Seeing how deep these attachments are and aware that at least some suicidal behavior has been associated with the abrupt loss, I want parents to know that it could be a vulnerable time.”

Character.AI announced its decision to disable chatbots for kids in late October, in response to political pressure and news reports of teens who had become suicidal after prolonged use.
One of those teens, a 14-year-old boy from Florida, fell in love with his chatbot and spent days on end confiding in it and exchanging sexual fantasies. When his mother took away his phone as punishment for misbehaving at school, the boy became despondent, a state his mother interpreted after his death as a blend of withdrawal and grief.
Character.AI is taking care to roll out the ban slowly, according to company spokesperson Cassie Lawrence. The company consulted with experts in teen online safety, has limited the hours per day kids can spend chatting ahead of the termination, and offered them lists of alternative teen forums and mental health resources.
“We have widely announced the forthcoming changes to our users, in a variety of channels, including through our app/website, on our blog, in our help center, and in user forums on Reddit and Discord, so that affected users would have time to adjust to this new paradigm,” Lawrence said in a statement.
Still, Halpern is concerned enough about the risks teens might face once the ban is completed on Nov. 25 that she asked the California Department of Public Health to issue a public service announcement warning parents to watch their kids for mental health needs in the weeks after.
The department did not respond to requests for comment or indicate whether it would issue a warning or not.
Schools and educators also have a role to play in starting conversations about chatbots, since many parents are unaware their children have been using them at all, said Robbie Torney, a senior director at Common Sense Media, a nonprofit that conducts AI research, risk assessment, and education.
Their polling shows nearly three out of four teens said they have used an AI chatbot, about half used one regularly, and a third said they prefer to talk to a chatbot rather than a human being.
Torney argued Character.AI should be doing much more to prepare young people and their parents for the upcoming phaseout. While the time limits are better than an abrupt cutoff, he said a more gradual weaning process would be safer.
The company should be taking more proactive steps to connect kids in distress to real-life mental health clinicians or telehealth appointments, he added, and should at least provide educational resources for parents on how to recognize if their child is developing a chatbot dependence and how to talk to them about it.
“Character.AI built this problem and now they’re pulling the plug without taking responsibility for the harm they’ve caused or providing support for the withdrawal they’ve created,” Torney said.
*An Associated Press photo caption in an earlier version of this story incorrectly identified Character.AI as the company generating an AI companion. 
