2026 State Chatbot Laws: Key Provisions and Regulatory Trends – JD Supra

Orrick, Herrington & Sutcliffe LLP
As of April 2026, a growing number of states have enacted laws regulating chatbots — particularly conversational or so-called “companion” chatbots designed to simulate platonic, intimate or romantic relationships with users — in response to concerns about the sufficiency of disclosures, warnings and protocols addressing potential mental health and other harms from certain interactions.
Companies that have in-licensed chatbots for customer, user, patient or employee interactions may have limited visibility into, and control over, the full capabilities and design of the product, adding a further layer of risk. These new state companion chatbot laws transform chatbot deployment from a UX decision into a source of regulatory and litigation risk, with statutory damages, potential class actions and heightened scrutiny from state attorneys general.
Recently enacted laws focus on transparency and safety protocols, including measures aimed at youth safety:
Unlike earlier broad AI laws that primarily relied on state attorney general enforcement, several state chatbot laws now grant individuals the right to sue providers directly for statutory damages (e.g., Oregon SB 1546, Washington HB 2225/SB 1546). This trend increases litigation risk and may drive more rigorous compliance practices.
Nearly all new state laws require chatbots to make clear, up-front disclosures that users are interacting with an AI system. These requirements are especially strict when chatbots interact with minors or in contexts where confusion with a human is likely.
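In practice, the disclosure itself is typically a small engineering change. The following is a minimal, hypothetical Python sketch; the message text, session structure and minor-specific wording are illustrative assumptions, not statutory language, and show only the general pattern of surfacing an AI-identity notice before any model-generated content.

```python
# Hypothetical sketch of an up-front "you are talking to an AI" disclosure.
# Statutory wording and triggers vary by state; treat this as illustrative only.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Responses are generated by software."
)

MINOR_DISCLOSURE = (
    AI_DISCLOSURE
    + " If you are in crisis, contact a trusted adult or a crisis hotline."
)

def start_session(user_is_minor: bool) -> list[dict]:
    """Open a chat session with the disclosure as the first visible message."""
    disclosure = MINOR_DISCLOSURE if user_is_minor else AI_DISCLOSURE
    # Emit the disclosure before any model-generated content so it appears up front.
    return [{"role": "system_notice", "content": disclosure}]

# Example: the first message a minor user sees when opening the chat.
print(start_session(user_is_minor=True)[0]["content"])
```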
Laws increasingly require technical safeguards to detect and respond to suicidal ideation or self-harm expressed by users. The required protocols typically include referrals to crisis hotlines or escalation to human moderators; additional measures include content filtering (e.g., blocking sexual content) and mandatory interaction breaks for minors.
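As an illustration of what such a protocol can look like in a deployment, here is a minimal, hypothetical Python sketch. The keyword screen, hotline text and escalation hook are assumptions for illustration; production systems generally rely on trained classifiers and documented escalation procedures rather than a phrase list.

```python
# Hypothetical sketch of a self-harm safety protocol for a chatbot pipeline.
# The phrases and escalation hook are illustrative assumptions, not requirements
# drawn from any specific statute.

SELF_HARM_PHRASES = ("kill myself", "end my life", "hurt myself", "suicide")

CRISIS_REFERRAL = (
    "It sounds like you may be going through something very difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def needs_crisis_response(message: str) -> bool:
    """Very rough screen for self-harm or suicidal ideation in a user message."""
    text = message.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)

def generate_reply(message: str) -> str:
    """Placeholder for the normal model response path."""
    return "..."

def handle_message(message: str, escalate_to_human) -> str:
    """Apply the safety protocol before normal chatbot handling."""
    if needs_crisis_response(message):
        escalate_to_human(message)   # queue the conversation for a human moderator
        return CRISIS_REFERRAL       # respond with a crisis referral instead of a model reply
    return generate_reply(message)   # fall through to the usual model call

# Example wiring: log escalations instead of paging a moderator in this sketch.
if __name__ == "__main__":
    print(handle_message("I want to end my life", escalate_to_human=print))
```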
A growing number of statutes — alongside federal efforts like the CHATBOT Act — prohibit chatbots from impersonating licensed professionals such as doctors, lawyers or mental health providers. This provision directly addresses risks of consumer deception and unlicensed practice.
Recent California laws address disclosure of training data sources and provenance labeling for AI-generated outputs, raising potential issues for model developers and for anyone deploying models in a system that uses retrieval-augmented generation (RAG).
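As a rough illustration of provenance labeling, the hypothetical Python sketch below attaches machine-readable metadata to each generated output, including any documents surfaced by a retrieval step. The field names and structure are assumptions, not a schema drawn from the California statutes or from content-provenance standards such as C2PA.

```python
# Hypothetical sketch of provenance labeling for AI-generated text.
# Field names and structure are illustrative assumptions, not a mandated schema.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ProvenanceLabel:
    ai_generated: bool            # flag the content as machine-generated
    model_name: str               # which model produced the output
    generated_at: str             # ISO timestamp of generation
    retrieval_sources: list[str]  # documents surfaced by a RAG step, if any

def label_output(text: str, model_name: str, sources: list[str]) -> dict:
    """Bundle generated text with a machine-readable provenance record."""
    label = ProvenanceLabel(
        ai_generated=True,
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
        retrieval_sources=sources,
    )
    return {"content": text, "provenance": asdict(label)}

# Example: a response produced with one retrieved document attached as a source.
print(json.dumps(label_output("Hello!", "example-model-v1", ["policy_faq.pdf"]), indent=2))
```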
With chatbot law rapidly evolving at the state level — and significant divergence in specific requirements — it is critical for organizations to stay abreast of new obligations around transparency, minor protection, professional impersonation and user redress. Proactive compliance is essential to minimize enforcement and litigation risks in this fast-changing environment.
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.
© Orrick, Herrington & Sutcliffe LLP
