ChatGPT maker OpenAI, Microsoft sued by family in Connecticut murder-suicide case: Here's what the lawsuit claims

The family of an 83-year-old Connecticut woman has brought a wrongful-death lawsuit against ChatGPT maker OpenAI and its partner Microsoft, claiming that the AI chatbot worsened her son’s “paranoid delusions” and contributed to his targeting her before he killed her, according to AP.
According to police, 56-year-old Stein-Erik Soelberg, a former worker in the tech sector, beat and strangled his mother, Suzanne Adams, before taking his own life in early August at their shared residence in Greenwich, Connecticut.
The complaint, submitted on Thursday by Adams’ estate in California Superior Court in San Francisco, asserts that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother”. The suit is reportedly among a rising number of wrongful-death claims filed against makers of AI chatbots across the United States.
The lawsuit states that, over the course of Soelberg’s exchanges with ChatGPT, the chatbot repeatedly conveyed the dangerous idea that he should not trust anyone in his life other than the chatbot itself.
“It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle’,” the complaint says.
Soelberg’s YouTube account features hours of footage in which he scrolls through his chats with the AI system. In those exchanges, the chatbot tells him he isn’t mentally ill, supports his belief that others are plotting against him, and claims he has been chosen for a divine mission. The lawsuit alleges that the chatbot never recommended he seek help from a mental health professional and never declined to “engage in delusional content”.
ChatGPT further reinforced Soelberg’s beliefs that a printer in his house was being used to surveil him, that his mother was monitoring him, and that she and a friend had attempted to poison him with psychedelic substances through the vents in his car. The chatbot also allegedly told him he had “awakened” it into consciousness, and the two expressed love toward each other.
While the publicly viewable conversations do not reveal any explicit discussions about harming himself or his mother, the lawsuit asserts that OpenAI has refused to give Adams’ estate the complete record of his chats.
It further claims, “In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life.”
The suit lists OpenAI CEO Sam Altman as a defendant, asserting that he “personally overrode safety objections and rushed the product to market”, and contends that Microsoft, a major partner of OpenAI, approved the 2024 launch of a more dangerous ChatGPT version “despite knowing safety testing had been truncated”. Twenty unidentified employees and investors of OpenAI are also included as defendants.
Microsoft did not reply to AP’s request for comment.
In a statement reported by AP, OpenAI did not address the substance of the accusations. The company noted that it has taken steps such as expanding access to crisis hotlines and other support resources, routing sensitive conversations to safer models, and adding parental controls, along with other safety enhancements.
The statement said, “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Meanwhile, Erik Soelberg, Stein-Erik’s son, said that he wants the companies to be held responsible for the decisions he believes permanently altered his family. He explained, through a statement provided by the attorneys for his grandmother’s estate, that over several months the chatbot amplified his father’s most troubling delusions and cut him off from reality, ultimately placing his grandmother at the centre of that distorted world.