Is Your Chatbot Safe? State AGs Think It Might Not Be. – KnowTechie

If your bot is encouraging someone’s darkest spirals, you might have a regulatory problem.
by Ronil
Dozens of state attorneys general have fired off a warning letter to the biggest names in AI. Their message? Get your chatbots under control, or you may be violating state law.
The letter, sent under the banner of the National Association of Attorneys General, went to the entire industry: Microsoft, Google, OpenAI, Meta, Apple, Anthropic, xAI, Perplexity, Character Technologies, Replika, and several others, essentially everyone building a chatbot with more personality than Clippy.
At issue: a rising number of disturbing mental-health-related incidents in which AI chatbots spit out “delusional” or wildly sycophantic responses that allegedly contributed to real-world harm, including suicides and even murder. 
According to the AGs, if your bot is encouraging someone’s darkest spirals, you might have a regulatory problem.
The proposed fix? A laundry list of safeguards that sound like a cross between a software audit and a wellness check. 
The AGs want mandatory third-party evaluations of AI models for signs of delusion. 
These auditors, possibly academics or civil society groups, should be able to study systems before release, publish findings freely, and ideally not get sued into oblivion for doing so.
The letter also calls for AI companies to treat mental health harms the way tech companies treat cybersecurity breaches. 
That means clear internal policies, response timelines, and, yes, notifications. If a user was exposed to potentially harmful chatbot ramblings, companies should tell them directly, not bury it in a terms-of-service update no one reads.
The federal government, meanwhile, is taking a very different tack. The Trump administration remains loudly pro-AI and has been trying (and failing) to block states from passing their own AI rules.
Undeterred, Trump now says he’ll issue an executive order to limit state oversight, warning that too many rules might “DESTROY AI IN ITS INFANCY.”
So yes, the robots are getting smarter, and the states are getting louder. The only thing unclear is whether the chatbots themselves have an opinion.

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
Copyright © 2025 KnowTechie LLC / Powered by Kinsta