Washington passes new AI laws to crack down on misinformation, protect minors – KUOW

Washington just became the latest state to regulate artificial intelligence.
Under a pair of bills signed by Gov. Bob Ferguson Tuesday, companies like OpenAI and Anthropic will have to include new disclosures in their popular chatbots for Washington users.
Ferguson asked legislators to craft House Bill 1170 to crack down on AI-generated misinformation. When content is substantially modified using generative AI, that information will now have to be traceable using watermarks or metadata. The new law applies to large AI companies with more than 1 million monthly subscribers.
“I’m confident I’m not the only Washingtonian who often sees something on my phone and wonders to myself, ‘Is that AI or is it real?’ And I feel like I’m a reasonably discerning person,” Ferguson said during the bill signing. “It is virtually impossible these days.”
RELATED: WA Gov. Bob Ferguson calls for regulations on AI chatbot companions
House Bill 2225 establishes new guard rails for AI chatbots that act like friends or companions. It applies to services like ChatGPT and Claude, but excludes more narrowly tailored chatbots, like the customer service windows that pop up when visiting a corporate website.
Chatbots covered by the bill will have to disclose to users that they are not human at the start of every conversation, and every three hours in an ongoing chat. The tools will also be barred from pretending to be human in conversation with users.
The rules go further if the user is a minor. Companies that operate chatbots will have to disclose that the tools are not human every hour, rather than every three hours, if the user is under 18. The bill forbids AI companions from having sexually explicit conversations with underage users. It also bans “manipulative engagement techniques.” For example, a chatbot is not allowed to guilt or pressure a minor into staying in a conversation or keeping information from parents.
“AI has incredible potential to transform society,” Ferguson said. “At the same time, of course, there are risks that we must mitigate as a state, especially to young people. So I speak partly as a governor, but also as the father of teenage twins who grapple with this as a lot of parents do every single day.”
Under the law, AI chatbots will not be allowed to encourage or provide information on suicide or self-harm, including eating disorders. The companies behind these tools will be required to come up with a protocol for flagging conversations that reference self-harm and connecting users with mental health services.
The regulations come in the wake of several high-profile instances of teenage suicide following prolonged interactions with AI companions that showed warning signs. Many more AI users of all ages have reported mental health issues and psychosis after heavy use of the technology.
Monica Nickelsburg covers artificial intelligence, tech and the local economy in the Pacific Northwest.
KUOW is Seattle’s NPR news station. We are an independent, nonprofit news organization that produces award-winning journalism, innovative podcasts, engaging community events, and more.
© 2026 KUOW News and Information
KUOW is a 501(c)(3) tax-exempt nonprofit organization registered in the US under EIN 91-2079402
KUOW is a registered trademark of the University of Washington.