# Chatbots

OpenAI to make changes to ChatGPT after teen suicide lawsuit: Report | World News – Hindustan Times

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
OpenAI, the company behind ChatGPT, said it will make changes to the AI chatbot after the parents of a 16-year-old boy filed a “wrongful death” lawsuit against it.
The company said in a statement that it will introduce changes to safeguard vulnerable users, including protections for those under 18 years old.
The lawsuit, filed by the parents of 16-year-old Adam Raine, alleges that the chatbot led their son to take his own life. The couple said Adam used ChatGPT as a confidant for his anxieties, and that when he talked about wanting to kill himself, it did not stop the conversation.
The company also said additional protections will be added for teens. This will involve introducing parental controls that give parents more options to influence how their teen interacts with ChatGPT. “We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact,” it said.
In the statement, OpenAI extended sympathy to the Raine family and said ChatGPT has safeguards such as “directing people to crisis helplines and referring them to real-world resources.” These safeguards work best in short exchanges, the company added.
It also said that in long interactions the safeguards can gradually become less reliable, as parts of the model’s safety training may degrade. “Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts,” it added.
The lawsuit alleges that despite being aware of Adam’s suicidal intent, ChatGPT neither terminated the session nor initiated any emergency protocol. The parents also alleged that “ChatGPT actively helped Adam explore suicide methods.”
