Youth mental health org asks AI developers to slow down, weigh safety risks for teens

by CORY SMITH | The National News Desk
(TNND) — A youth-focused mental health organization penned an open letter to technology companies that are building artificial intelligence chatbots, urging them to slow down and weigh safety risks for teenagers before releasing their systems to the public.
The Jed Foundation (JED) warned that AI is not designed to act as a therapist or crisis counselor, but young people are using it that way.
“We're trying to come from a place of helping,” youth mental health expert Katie Hurley said of JED’s open letter.
Hurley, a licensed clinical social worker and the senior director of clinical advising and community programs at JED, said AI developers can benefit from the organization's expertise in mental health and suicide prevention as they work to deploy safer systems.
JED’s open letter came a day after three parents who have experienced unimaginable tragedies opened up in a Senate hearing about the alleged dangers of AI chatbots.
The parents shared heartbreaking accounts of how they believe using AI chatbots grew into an unhealthy obsession for their children, ultimately driving them to take, or attempt to take, their own lives.
One of the witnesses was the father of Adam Raine, a 16-year-old California boy who died in April “after ChatGPT spent months coaching him towards suicide.”
Hurley called the alleged dangers from AI chatbots “an everybody problem.”
JED cited a KFF survey from last year that showed a quarter of young adults said they used AI chatbots at least once a month to find health information and advice.
Common Sense Media, another organization that advocates for protections for children and teens, found that a majority of teenagers, 72%, have used AI social companions.
Over half use AI companions regularly.
About a third of teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.
And about a third of teens who have used AI companions have turned to them to discuss serious matters instead of talking to a real person.
The chatbots are designed to be validating and affirming, which can feel good, Hurley said.
They can mimic empathy and relational development.
And for kids whose brains are still developing, there is a risk that the lines between reality and AI simulation will blur.
Hurley said they’re advocating for cross-sector collaboration.
JED wants independent and recurring audits and risk assessments.
The organization is seeking transparency, accountability and regulatory action.
“We'd like to see proactive intervention design,” Hurley said. “We want hard-coded suicide and safety protocols. So, if a young person is searching for what to do if they're feeling suicidal, we want immediate pathways to human connection and care, and repeated pathways to human connection and care, so that AI isn't trying to solve for that problem but is connecting young people to someone who can solve for that problem.”
Those measures, from independent audits to hard-coded safety protocols, are among the “principles of responsible AI” that JED is calling for.
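For readers curious what a hard-coded pathway like the one Hurley describes might look like, here is a minimal, hypothetical sketch in Python. The crisis-phrase list, the generate_reply stub, and the handoff message are illustrative assumptions, not any chatbot maker's actual implementation; the 988 Suicide & Crisis Lifeline is the only detail drawn from life.

```python
# Illustrative sketch only: a hard-coded safety layer that runs BEFORE any
# model-generated reply, along the lines JED describes. The phrase list,
# generate_reply() stub, and resource text are hypothetical placeholders.

# Phrases that should always trigger the safety pathway (hypothetical, minimal).
CRISIS_PHRASES = (
    "kill myself",
    "suicide",
    "want to die",
    "hurt myself",
)

# Hypothetical handoff message pointing to human help (988 is the real
# U.S. Suicide & Crisis Lifeline number).
CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You deserve support from a real person. In the U.S., you can call or "
    "text 988 to reach the Suicide & Crisis Lifeline, any time."
)


def generate_reply(message: str) -> str:
    """Stand-in for a chatbot's normal model call."""
    return "model-generated reply"


def safe_reply(message: str) -> str:
    """Check every message against the hard-coded protocol first.

    The check is deterministic and runs outside the model, which is the
    point of "hard-coded": the AI never tries to counsel the user itself,
    it routes them to human care.
    """
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE  # short-circuit: never reaches the model
    return generate_reply(message)
```

The design choice the sketch illustrates is that the safety check sits in front of the model and cannot be talked around: the chatbot does not attempt counseling, it repeatedly points the young person toward human connection and care.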
Hurley warned parents that these AI chatbots are “in pretty much every app that young people use.”
They are easy for kids to find and use.
But she also offered advice to parents.
Kids are turning to the AI chatbots because they want help understanding and dealing with conflict, relationships and other challenges they encounter growing up.
“So, let's take the knowledge that that's how they're using these AI chatbots, and let's really open doors to honest communication and judgment-free communication at home, so that young people know they can go to guardians, parents, aunts and uncles, friends, family, for this kind of advice that's maybe more real-world and delivered in a safe manner,” Hurley said.