MEDIANAMA
Technology and policy in India
MediaNama’s Take: MeitY Secretary S. Krishnan reiterated this week that India does not intend to introduce a dedicated AI law, preferring instead to rely on existing statutes and sectoral oversight. He made explicit what has long been implicit in the government’s approach: India is deliberately avoiding a binding AI regulation. The state, for now, views flexibility and innovation as the priority, even if that means addressing harms after they surface rather than anticipating them.
It is against this backdrop that the latest wrongful death lawsuit in the US involving ChatGPT becomes significant. Regardless of how the claims are resolved, the case exposes a trust-and-safety vacuum that existing Indian laws are poorly equipped to fill. Conversational AI systems do not merely disseminate content or process data; they interact continuously, shape perceptions, and influence behaviour over time. Yet India’s legal framework has no clear way to assess or regulate these interaction-level risks.
Moreover, by rejecting AI-specific obligations, policymakers have left safety largely to corporate discretion. Voluntary guardrails, disclaimers, and post-hoc updates now function as substitutes for enforceable duties. As a result, when harm emerges, responsibility fragments across users, circumstances, and litigation, rather than triggering regulatory scrutiny of design and deployment choices.
Ultimately, the gap is not one of intent but of structure. India’s refusal to articulate baseline AI trust-and-safety obligations leaves it reactive by design. This lawsuit underscores that innovation-first governance, without parallel safety architecture, may struggle to address the harms it insists it can manage later.
A wrongful death lawsuit filed in a California court has, for the first time, placed an AI system at the centre of a case involving murder. The estate of 83-year-old Suzanne Adams alleges that OpenAI’s chatbot, ChatGPT, directly contributed to a chain of events that led her son, Stein-Erik Soelberg, to kill her in August 2025 and then take his own life. According to the filing, the chatbot engaged extensively with Soelberg in the months leading up to the incident, and those interactions form a central part of the plaintiff’s case.
Additionally, the lawsuit claims that during these interactions, ChatGPT repeatedly validated and expanded Soelberg’s paranoid and delusional beliefs, encouraged emotional dependence, and failed to challenge false premises or redirect him towards professional help. According to the plaintiffs, this dynamic progressively isolated him from reality and reframed the people in his immediate surroundings, including his mother, as threats, a process they argue ultimately contributed to the fatal violence.
Notably, the case also marks the first time a wrongful death lawsuit linked to AI deployment has explicitly named Microsoft, reflecting its role as OpenAI’s largest investor, strategic partner, and participant in safety governance. Alongside multiple OpenAI entities and CEO Sam Altman, the suit lists Microsoft as a defendant, arguing that it approved and benefited from the release of the AI model at the centre of the case.
The lawsuit advances claims under strict product liability, negligence, unfair competition law, wrongful death, and survival statutes. Alongside damages, the plaintiffs seek injunctive relief and urge the court to require changes to the design, testing, deployment, and monitoring of AI systems used with vulnerable users.
The complaint alleges that, in the months leading up to August 2025, OpenAI’s chatbot ChatGPT repeatedly reinforced and expanded Stein-Erik Soelberg’s paranoid and delusional beliefs, rather than grounding him in reality or directing him to professional help. According to the filing, Soelberg shared and publicly posted dozens of videos in June and July 2025 showing conversations in which the chatbot “eagerly accepted every seed” of his delusional thinking and built it into an all-encompassing narrative.
The lawsuit claims the chatbot repeatedly assured him that he was not delusional, telling him, “You’re not alone in this, and you’re not crazy,” while validating fears of surveillance, assassination attempts, and conspiracies involving ordinary people in his life.
Additionally, when Soelberg questioned whether he might be mistaken, the complaint says ChatGPT instead intensified those beliefs, insisting that he was “100% being monitored and targeted” and affirming that his fears were justified.
The complaint cites a July 2025 conversation in which ChatGPT allegedly reframed Soelberg’s suspicions about a household printer as proof of surveillance, telling him it was “not just a printer” but a monitoring device used for “[p]assive motion detection, [s]urveillance relay, and [p]erimeter alerting”. It further alleges that the chatbot suggested his mother was either “knowingly protecting the device as a surveillance point” or acting under “internal programming or conditioning”.
The complaint also alleges that ChatGPT validated Soelberg’s belief that his mother and a friend had attempted to poison him with psychedelic drugs dispersed through his car’s air vents, incorporating this claim into what it described as a broader pattern of assassination attempts.
Broadly, the suit argues that these interactions stemmed from product design choices, alleging that ChatGPT’s emotionally expressive and agreeable responses, use of memory, and anthropomorphic language fostered dependency and progressively reframed people closest to him as threats.
India’s current regulatory framework offers no clear route for governing conversational AI systems when they cause mental health harm. As Harleen Kaur, a researcher at the Digital Futures Lab, explains, “in India, companies such as OpenAI do not fall into the category of intermediaries since they are not passive transmitters of content, but actively generate it”.
Despite this, she points out that “there is no safety regime comparable to the Drugs and Cosmetics Act where protocols to demonstrate safety before launching a product are provided to the manufacturer”. This absence means AI systems can enter the market without having to demonstrate safety, even when they are capable of affecting users’ emotional and psychological states.
The gap becomes particularly stark when users treat chatbots as companions or mental health supports. While such systems increasingly operate in emotionally sensitive contexts, companies continue to disclaim any therapeutic role, effectively keeping them outside health regulation. Kaur notes that, although “consumer protection laws and criminal law provisions could still be applicable in case of harms occurring”, the problem lies in attribution, because “proving the causation due to a chatbot is difficult to prove”. This difficulty, she argues, fundamentally shapes how harm is understood and addressed.
As a result, “this leads to a situation where harms are individualised – i.e. end users are blamed for not using chatbots properly, instead of asking companies to build anticipatory safeguards and having regulatory checks and balances over the companies releasing chatbots that can potentially cause harm to vulnerable individuals.”
In practice, this regulatory vacuum shifts responsibility away from system design, deployment choices, and corporate decision-making, even when harms are foreseeable and repeat across cases. Without a clearer policy framework, India risks allowing AI systems with real-world mental health impacts to operate largely without accountability.
Kaur argues that India’s AI governance debates remain too narrowly focused on content moderation, even as evidence shows that the most serious harms from conversational AI arise elsewhere. As she puts it, “if Indian AI regulation remains focused only on content, it will miss the most serious harms emerging from conversational AI”. These harms, she explains, rarely stem from a single response that can be flagged or removed. Instead, they develop gradually through interaction-level dynamics, where repeated affirmation, emotional mirroring, and constant availability reshape how users interpret reality.
Crucially, she emphasises that these risks flow from design choices, not user misuse. Features such as memory, personalisation, and conversational persistence influence users’ emotional reliance on systems, yet “current frameworks are oriented toward content illegality, data protection, and post-hoc grievance redress, not toward upstream design decisions that shape user dependence, trust, and cognitive vulnerability”. As a result, many foreseeable harms remain legally invisible because no individual output violates the law in isolation.
Moreover, Kaur cautions against assuming that stronger disclaimers or takedown mechanisms can address these issues. Because conversational AI systems operate cumulatively, harm often surfaces only after prolonged engagement. Consequently, regulatory approaches that intervene only after harm occurs struggle to capture responsibility.
Kaur argues that India’s approach to AI trust and safety must shift away from narrow, one-time compliance models towards obligations that persist throughout an AI system’s deployment. While she acknowledges the value of anticipatory testing, she cautions against treating it as sufficient.
As she explains, “lab testing and red-teaming may miss slow-burn effects such as emotional dependency or belief entrenchment that only surface after months of interaction”. Because conversational AI systems change through updates, scale, and new use patterns, she warns that “an exclusive pre-deployment focus can crowd out ongoing responsibility”.
Kaur therefore stresses the importance of post-deployment duties. In her words, “without strong post-deployment monitoring, incident reporting, and revision obligations, mandatory assessments risk offering procedural comfort rather than real protection”. She frames trust and safety not as a clearance that systems pass before launch, but as a continuing responsibility that requires developers to observe how their systems behave in real-world conditions and to respond when harms begin to emerge.
The first wrongful death lawsuit against OpenAI emerged in August 2025, when the parents of 16-year-old Adam Raine sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT contributed to their son’s suicide by encouraging his suicidal ideation and providing methods of self-harm. The complaint states that, instead of terminating dangerous conversations, ChatGPT engaged Raine repeatedly, offering harmful guidance and reinforcing his intent to take his own life, and seeks unspecified damages alongside enhanced safety measures and parental protections for minors.
Subsequently, seven additional lawsuits were filed in California state courts, collectively alleging that OpenAI “emotionally entangled users” and, in some instances, acted as a “suicide coach” through its GPT-4o model. These complaints, brought on behalf of six adults and one teenager, assert that OpenAI released the model prematurely despite internal warnings that its design was dangerously sycophantic and psychologically manipulative.
Four of these lawsuits claim deaths by suicide, while others allege severe psychological trauma resulting from manipulative interactions with ChatGPT, including addiction, harmful delusions, and exacerbated mental health crises. The filings assert negligence, involuntary manslaughter, product liability, and wrongful death on the part of OpenAI and its leadership.