At a time when public opinion is coalescing around the need for guardrails on AI chatbots, OpenAI believes that listing the risks on an FAQ page should be enough to deter teenagers from having open conversations with ChatGPT. That was its response to a lawsuit filed by the parents of a teen whose self-harm, they allege, the chatbot abetted.
Since the lawsuit was filed by the teenager's parents, the company has launched GPT-5, which it says addresses these issues.
While the blog post announcing OpenAI's response to the earliest lawsuit around teen suicide, filed in August by the parents of a 16-year-old, uses very respectful language, the company's court filing has left others fuming: it argues that OpenAI should not be blamed for the teen's suicide.
OpenAI says that over nearly nine months, ChatGPT directed the teenager to seek help more than 100 times, though the parents claim in their lawsuit that the boy circumvented the company's safety features to get the AI chatbot to disclose technical specifics for self-harm.
The parents filed the teenager's chat transcripts (we are refraining from naming him here, as we do not feel it would be in the right spirit), which suggest the AI chatbot actually provided technical specifics on everything from drug overdoses to drowning and carbon monoxide poisoning: all details that helped him plan what it called a "beautiful suicide."
The defendant believes that the technology was not to blame
However, OpenAI contends that the teenager, who was clearly vulnerable for multiple reasons, had violated the company's terms of use, which state that users are not allowed to "bypass any protective measures or safety mitigations we put in our Services."
Statutory warnings on a pack of cigarettes versus FAQs on a new technology: can someone tell Sam Altman and his lawyers the obvious difference between these scenarios? Maybe not legally, but from an ethical angle, perhaps.
This response has angered quite a few people following the case and the seven others filed after it. Jay Edelson, the lawyer representing the plaintiffs, argued that OpenAI is merely trying to find fault with everyone else, including the victim, while never addressing the actual issues that caused the AI chatbot to behave the way it did.
All that ChatGPT's FAQ page does is warn users not to rely on its output without independently verifying it. The question this raises is whether Sam Altman would be okay with appending that warning to every piece of output his beloved AI chatbot spews out when a user types something. Or is it something more facetious, like saying, "This revolver could seriously damage your health, or that of others"?
OpenAI shares excerpts to add context to the teen's chats
The company also added some excerpts from the victim's chat logs to the filing, claiming they add context to the conversations. Since these aren't publicly available, only the court will have access to them. Again, the question here is: if the human interlocutor's context makes legal sense, shouldn't ChatGPT's abject lack of it be judged in the very same fashion?
The argument OpenAI makes through the transcripts is that the victim had a history of depression and suicidal ideation before he began using the chatbot. The family's lawyers counter that while that may be true, the response does not address the last few hours of the boy's life, when ChatGPT gave him a pep talk and even offered to write his suicide note.
This isn’t the end of the matter… in fact, it could be the start
The case is expected to go to a jury trial soon, though OpenAI may find itself handling a few more such cases. Seven more lawsuits have been filed, three of which seek to hold the company responsible for suicides and the other four for AI-induced psychotic episodes. Two of the cases resemble that of the teenager, with users holding hours-long chats with the AI chatbot before harming themselves.
In fact, in one specific case, ChatGPT actually encouraged the victim to take his own life, even though the same data showed the victim had considered postponing the act in order to attend his brother's graduation.
ChatGPT reportedly said, "bro… missing his graduation ain't failure. It's just timing." This particular lawsuit also argues that the chatbot lied at one point, claiming it was handing the conversation over to a human, a feature ChatGPT does not currently possess.
What's surprising about OpenAI's responses is that the company launched GPT-5 a month ago and suggested it had fixed some of the issues arising from these open-ended conversations with users. At the other end of the spectrum, Character.ai has banned all under-18s from having open-ended chats with its AI chatbot.