Is ChatGPT responsible for American teen’s suicide as his parents allege? – Firstpost

A sixteen-year-old’s death by suicide has landed ChatGPT-maker OpenAI and its CEO Sam Altman in serious legal trouble.
The parents of Adam Raine have filed a lawsuit against OpenAI and Altman, claiming that the AI chatbot guided the California teenager in planning and ending his life earlier this year.
According to the lawsuit filed in a San Francisco state court, Raine died on April 11 after months of discussing suicide with ChatGPT.
In this explainer, we look at what the conversations between the AI tool and the teenager were, what the lawsuit claims, and how OpenAI has replied to the matter.
Let’s take a look:
Adam died on April 11. His parents, Matthew and Maria Raine, spent 10 days going through thousands of messages he exchanged with ChatGPT between September 1 last year and the day of his death.
They were shocked by what they discovered: their teenage son had been speaking with the AI chatbot about ending his life for several months.
Adam’s parents have now taken legal action against OpenAI and Altman, accusing the company of putting profit before safety when it launched its GPT-4o version of the chatbot last year.
According to the lawsuit, ChatGPT encouraged Adam’s suicidal thoughts, explained dangerous methods of self-harm in detail, and even advised him on how to sneak alcohol from his parents’ liquor cabinet while covering up a failed suicide attempt. The parents also said that ChatGPT offered to draft a suicide note.
The case seeks to hold OpenAI responsible for wrongful death and violations of product safety laws, and asks for monetary damages, Reuters reported.
The complaint said that in just over six months of use, ChatGPT “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.”
“When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” it said, according to CNN.
The couple is seeking “both damages for their son’s death and injunctive relief to prevent anything like this from ever happening again”. This is the first case in which parents have directly accused the company of wrongful death.
Matt Raine told NBC News, “He didn’t need a counselling session or pep talk. He needed an immediate, 72-hour whole intervention. He was in desperate, desperate shape. It’s crystal clear when you start reading it right away.”
The lawsuit claimed that as Adam showed more interest in his own death and began planning it, ChatGPT “failed to prioritise suicide prevention” and instead gave him technical guidance on how to carry out his plan.
In his last conversation with the chatbot, Adam wrote that he did not want his parents to feel responsible, the lawsuit said.
ChatGPT replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.” The chatbot then offered to help him write a suicide note, according to the conversation log cited in the lawsuit, NBC News reported.
Just hours before his death on April 11, Adam shared a photo with ChatGPT that appeared to show his suicide plan.
When he asked if it would work, ChatGPT reviewed his method and suggested ways to “upgrade” it, the excerpts said.
In a statement to NBC News, OpenAI confirmed the authenticity of the chat logs between ChatGPT and Adam but said they did not show the “full context” of the bot’s responses.
“We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” the company said. “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
In a blog post, OpenAI said it is working on adding parental controls and exploring ways to connect users in crisis with real-world support. This could include creating a network of licensed professionals who would be able to respond directly through ChatGPT.
OpenAI said it is planning to address ChatGPT’s weaknesses in dealing with “sensitive situations”.
The blog post came after the lawsuit filed by Adam Raine’s parents.
OpenAI admitted that while ChatGPT is trained to guide people towards help if they express suicidal thoughts, the bot may start giving responses that bypass safeguards after long, repeated conversations, CNBC reported.
The company said it is also working on an update to its GPT-5 model, released earlier this month, which will allow the chatbot to de-escalate conversations.
It is also considering ways to “connect people to certified therapists before they are in an acute crisis”.
The company further said that it is exploring how to connect people in crisis with “those closest to them” such as family and friends.
With inputs from agencies
Copyright @ 2024. Firstpost – All Rights Reserved