In August, parents Matthew and Maria Raine sued OpenAI and its CEO, Sam Altman, over their 16-year-old son Adam’s suicide, accusing the company of wrongful death. On Tuesday, OpenAI responded to the lawsuit with a filing of its own, arguing that it shouldn’t be held responsible for the teenager’s death.
OpenAI claims that over roughly nine months of usage, ChatGPT directed Raine to seek help more than 100 times. But according to his parents’ lawsuit, Raine was able to circumvent the company’s safety features to get ChatGPT to give him “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” helping him to plan what the chatbot called a “beautiful suicide.”
Because Raine maneuvered around its guardrails, OpenAI argues, he violated its terms of use, which state that users "may not … bypass any protective measures or safety mitigations we put on our Services." The company also points to its FAQ page, which warns users not to rely on ChatGPT's output without independently verifying it.
“OpenAI tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,” Jay Edelson, a lawyer representing the Raine family, said in a statement.
OpenAI included excerpts from Adam’s chat logs in its filing, which it says provide more context to his conversations with ChatGPT. The transcripts were submitted to the court under seal, meaning they are not publicly available, so we were unable to view them. However, OpenAI said that Raine had a history of depression and suicidal ideation that predated his use of ChatGPT and that he was taking a medication that could make suicidal thoughts worse.
Edelson said OpenAI’s response has not adequately addressed the family’s concerns.
“OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” Edelson said in his statement.
Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed that seek to hold the company accountable for three additional suicides and four users experiencing what the lawsuits describe as AI-induced psychotic episodes.
Some of these cases echo Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT directly before their respective suicides. As in Raine’s case, the chatbot failed to discourage them from their plans. According to the lawsuit, Shamblin considered postponing his suicide so that he could attend his brother’s graduation. But ChatGPT told him, “bro … missing his graduation ain’t failure. it’s just timing.”
At one point during the conversation leading up to Shamblin’s suicide, the chatbot told him that it was letting a human take over the conversation, but this was false, as ChatGPT did not have the functionality to do so. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied, “nah man — i can’t do that myself. that message pops up automatically when stuff gets real heavy … if you’re down to keep talking, you’ve got me.”
The Raine family’s case is expected to go to a jury trial.
If you or someone you know needs help, call or text 988 to reach the Suicide & Crisis Lifeline (formerly 1-800-273-8255), or text HOME to 741741 for free, 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.
Amanda Silberling is a senior writer at TechCrunch covering the intersection of technology and culture. She has also written for publications like Polygon, MTV, the Kenyon Review, NPR, and Business Insider. She is the co-host of Wow If True, a podcast about internet culture, with science fiction author Isabel J. Kim. Prior to joining TechCrunch, she worked as a grassroots organizer, museum educator, and film festival coordinator. She holds a B.A. in English from the University of Pennsylvania and served as a Princeton in Asia Fellow in Laos.
You can contact or verify outreach from Amanda by emailing amanda@techcrunch.com or via encrypted message at @amanda.100 on Signal.
© 2025 TechCrunch Media LLC.