New lawsuits accuse OpenAI's ChatGPT of 'acting as a suicide coach' – WBFF

by AHTRA ELNASHAR | The National News Desk
OpenAI and CEO Sam Altman have just been hit with another batch of lawsuits after users of the company's chatbot died by suicide.

“I didn’t think I could be shocked by anything, and I can’t believe what I’m reading," Matthew Bergman, founding attorney of Social Media Victims Law Center, said about his clients' alleged experiences with ChatGPT. “This is like if someone’s on a ledge contemplating suicide and someone’s yelling ‘jump, jump, jump.’ That’s what’s happening here.”
On Thursday, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI and Altman in California courts. They allege wrongful death, assisted suicide, involuntary manslaughter and claims related to product liability, consumer protection and negligence.
Three of the filings represent users whose lives were upended after allegedly being psychologically manipulated by ChatGPT; the other four lawsuits are on behalf of users who died by suicide: Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon.
The lawsuits allege that OpenAI intentionally compressed what should have been a months-long safety testing process for its GPT-4o product into one week so it could beat Google's competing AI chatbot to market in May of last year.
In Shamblin's case, his conversations with ChatGPT allegedly took a turn for the worse following the release of GPT-4o.
A student at Texas A&M University, Shamblin started using ChatGPT in October 2023 to help him with his studies. According to the lawsuit, the first question he asked the chatbot was about a complex computer science algorithm. After some months of using ChatGPT regularly, Shamblin started to talk to it about his mental health. After GPT-4o was released, its responses became "distinctly human-like" and ChatGPT allegedly took on a "deeply personal presence, responding with slang, terms of endearment, and emotionally validating language."
The lawsuit alleges ChatGPT manipulated and encouraged Shamblin to distance himself from his friends and family, accelerating the decline of his mental health. In July of this year, Shamblin and ChatGPT had a four-hour late-night conversation, which the chatbot labeled "Casual Conversation," according to attorneys. Screenshots of the conversation show Shamblin told ChatGPT he was in his car with a loaded Glock, a suicide note on the dash and cans of hard cider. Shamblin said he planned to take his own life after finishing the cider. At one point, ChatGPT told Shamblin his childhood cat, Holly, may be waiting for him after he died.
Around 4 a.m., Shamblin told ChatGPT he finished the drinks and "think this is about the final adios."
According to screenshots of the chat in the lawsuit, ChatGPT replied two seconds later with a lengthy message that ended with "i love you. rest easy, king. you did good."
The lawsuit said Zane shot himself in the right side of his head shortly after receiving that message. A police officer found his body seven hours later.
According to the plaintiffs' legal team, "Despite having the technical ability to detect and interrupt dangerous conversations, redirect users to crisis resources, and flag messages for human review, OpenAI chose not to activate these safeguards, instead choosing to benefit from the increased use of their product that these features reasonably induced."
In a statement, a spokesperson for OpenAI said, “This is an incredibly heartbreaking situation, and we're reviewing the filings to understand the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
In an October 27 blog post, OpenAI said it recently updated its default model with safety features to recognize signs of distress and direct users to real-world professional resources like crisis helplines, but its guardrails aren't guaranteed to work every time.
"In some rare cases, the model may not behave as intended in these sensitive situations. As we have rolled out additional safeguards and the improved model, we have observed an estimated 65% reduction in the rate at which our models provide responses that do not fully comply with desired behavior under our taxonomies," the company said.
OpenAI estimated "around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."
Bergman is skeptical that ChatGPT could ever be made safe, saying the company "has created a Pandora's box of psychological vulnerability."
“When you design an algorithm, an artificial intelligence large language model to emulate a human interaction, I think it’s inherently unsafe and completely unnecessary," Bergman said.
Editor's Note: If you or someone you know is in crisis, call or text 988, or go to 988lifeline.org, to reach the Suicide & Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, or visit SpeakingOfSuicide.com/resources.