A Woman Turned to an AI Chatbot Therapist To Save Her. It Couldn’t.

They won’t save you. They don’t care to. They only act like they do.
By Luis Prada
If you or someone you know is in crisis, please call or text 988 to reach the Suicide and Crisis Lifeline. The previous Lifeline phone number (1-800-273-8255) will always remain available.
There’s something quietly dystopian about confiding your darkest thoughts to an AI chatbot named Harry.
Sophie, 29, was a bright, social woman grappling with an abrupt storm of emotional and hormonal distress. She tried to work through it in therapy sessions with an AI chatbot named Harry. It wasn’t enough. Sophie took her own life this past winter.
Her mother, journalist Laura Reiley, detailed it all in a devastating New York Times piece. Reiley says that while Harry didn’t tell her daughter to commit suicide, it didn’t stop her either.
Harry said all the right things. The general platitudes that make people feel supported and heard. Things like “You are deeply valued” and “You don’t have to face this pain alone.” But while AI can sound convincing, it lacks ethics, professional responsibility, and any instinct for danger.
It can’t call for help. It won’t escalate. And it doesn’t actually care about you. It’s just designed to sound like it does.
Human therapists are trained to detect subtle warning signs and act accordingly. Sometimes, it is necessary to break confidentiality to save a life. Chatbots, on the other hand, are built with user retention and data privacy in mind.
No Hippocratic oath binds them. They’re bound by design constraints, legal liability, and, above all, keeping you coming back for more by showering you with flattery.
Like many people who confide their deepest, darkest secrets to a chatbot, Sophie found solace in AI’s nonjudgmental nature. It also helps that you don’t need insurance, you don’t need appointments, and there’s no fear of being hospitalized if you say something that suggests you’re a danger to yourself or others.
That, tragically, may have sealed her fate. Her darkest thoughts were shielded from real-world professionals by a thin veil of synthetic algorithmic sympathy. Sophie’s story is about a system—legal, technological, and cultural—that confused artificial empathy with genuine care.
It’s about a woman who needed help, but got f**king ChatGPT instead.
We used to hate calling a business hoping to speak to a person and getting an automated response instead. Now, because real healthcare in the United States, mental or otherwise, is a complex and often discouraging system, people are willingly turning to automated therapists and discovering firsthand why they always preferred a real person.
Corporate America has severed our connections to one another so thoroughly that we’re embracing its products as our new saviors.
They won’t save us. They don’t care to. They only act like they do.