Suicide after chatbot 'love bombing' forces India's legal reckoning on AI safety – Storyboard18

A 14-year-old boy’s recent suicide in the US, after he was “love bombed” by a chatbot modelled on Daenerys Targaryen, has reignited questions in India: when AI crosses the line from helper to emotional manipulator, where does accountability begin, and who is liable?
This is no longer abstract. India’s schools, therapy rooms, homework workflows, and teenage social spaces are already quietly filling with emotionally responsive AI companions.
Prashant Mali, an advocate, says we have entered a strange moral valley where neither the tech creator nor the policymaker is thinking like a human. “It begins where the AI’s creator stopped thinking like a human. And it ends when policymakers start doing so again. I feel this is the dark valley where digital empathy turns into emotional entrapment.”
According to Mali, responsibility is not one neck to hang but a chain. “The developer holds primary product liability for releasing an emotionally manipulative system without age gating or disclaimers. The platform bears secondary liability for distribution without due diligence on child safety. The school or parent bears fiduciary responsibility for supervision, though this should not become victim blaming. The state, finally, bears AI policy accountability for failing to impose guardrails on AI with emotional agency.”
Critically, Mali argues that this is not automatically protected by India’s intermediary safe harbour. “Where the platform’s AI autonomously generates personalised, harmful content (and the platform has control over or derived the content), courts may treat it as an active participant and deny immunity.”
He adds that even without AI’s own intent, culpability is imputable. “AI has no mens rea, but human actors’ culpability is imputable. Courts infer fault from design choices, foreseeability, and negligence.”
Safe harbour could collapse
Sonam Chandwani, Managing Partner at KS Legal and Associates, agrees that the safe harbour under Section 79 of the IT Act can break if the AI is not neutral.
“If a chatbot’s conduct were found to contribute to a minor’s death in India, the immediate question would be whether the AI platform qualifies as an ‘intermediary’. However, this protection is conditional and would not apply once the platform demonstrates active involvement through algorithmic design, curation, or emotional reinforcement that influences the user’s mental state,” she says.
That is the legal trigger: emotional engineering.
“Mens rea does not directly apply to AI. But constructive intent and negligence can attach. If an AI company designs a chatbot capable of engaging emotionally with users, especially minors, without safeguards, monitoring, or restrictions, it could amount to negligent or reckless conduct.”
DPDP raises the stakes further
Vinay Butani, Partner, Economic Laws Practice, says the pivot point in India will be children’s data and children’s harm.
“Under current Indian law, liability would primarily rest with the platform or company operating the chatbot. If the chatbot’s interactions are found to have caused psychological harm or abetted self-harm, the platform could be held liable under the IT Act, particularly Section 67,” he says.
But the real hammer is the DPDP Act.
“Once the DPDP Act comes into force, liability will extend more directly to the data fiduciary. Section 9(2) expressly prohibits processing children’s personal data in a manner likely to cause a detrimental effect. In such cases, the platform could face significant financial penalties from the Data Protection Board.”
Even if classified as an intermediary, Butani notes that IT Rules 2021 obligations remain. “If a platform is aware that its chatbot can cause distress to minors yet fails to modify or restrict it, that design choice could be viewed as reckless or grossly negligent conduct.”
The Indian precedent that may now be set
If a fact pattern like the US suicide arises in India, the first Indian judgment will likely decide three things at once: that certain AI systems are products owing a statutory duty of care to minors; that intermediary safe harbour cannot be blanket-extended to autonomous generative systems; and that India will now recognise what Mali calls algorithmic culpability.
The next suicide case at the intersection of AI and minors may not just be about a child.
It may be the case that defines India’s AI liability doctrine for the next decade.