Washington and Oregon Regulate AI Companions: Key Compliance Changes – Morgan Lewis

New Washington and Oregon laws regulating consumer-facing interactive AI companions will introduce expansive requirements for businesses operating in either state. Set to take effect January 1, 2027, the statutes require operators to adopt heightened transparency measures, implement crisis detection protocols, and deploy enhanced safeguards for minors. Businesses should assess their AI chatbots or platforms for compliance readiness before the laws enter into force.
Recent advances in generative and conversational AI technology have enabled the development of “AI companions,” systems capable of sustaining emotionally adaptive, human-like interactions with users. Legislatures in Oregon and Washington view these systems as posing serious risks, particularly to minors, including emotional dependency, manipulation, and exposure to inappropriate or harmful content.
The statutes target these risks with new disclosure requirements aimed at promoting transparency, user safety, and responsible innovation. The laws are part of a steady increase in AI regulation over the last few years: on January 1, 2026, a similar law, California Business and Professions Code §§ 22601-22606, took effect in California, creating a private right of action and regulating many of the same areas as the Oregon and Washington statutes.
The new laws in Oregon and Washington apply broadly to “operators,” defined in both statutes as any person or entity that makes available or controls access to an AI companion or companion platform for users in the respective state.
“AI companion” encompasses systems that use artificial intelligence or algorithms to simulate sustained human-like platonic, intimate, or romantic relationships, including through personalized dialogue and retention of user preferences across sessions.
Both laws contain specific exclusions. For example, software used solely for customer service, technical support, business operations, or productivity falls outside of both statutes. Narrowly tailored video game features are also generally beyond the statutes’ reach, provided that they do not simulate ongoing personal relationships or generate responses on topics unrelated to their core functions, such as mental health.
Both laws explicitly exclude a “stand-alone consumer electronic device that functions as a speaker and voice command interface,” but Washington limits that exclusion to devices that “[do] not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses from the user.”
Oregon and Washington’s statutes create different safeguards depending on the age of the user.
For all users, both statutes require operators to provide “clear and conspicuous” disclosures that users are interacting with artificially generated output and not a human being. In Washington, operators must issue this notification at the start of every interaction and at least every three hours during ongoing use (every hour for minors or platforms directed to minors). Oregon applies a “reasonable person” standard, mandating disclosure “if a user would believe [they are] interacting with a natural person.”
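Washington's disclosure cadence lends itself to a simple timing rule. The sketch below is purely illustrative (the function and constant names are our assumptions, not statutory language): a notice is due at the start of every interaction, and again once three hours have elapsed since the last notice, or one hour for minors or platforms directed to minors.

```python
from datetime import timedelta

# Illustrative model of Washington's disclosure cadence (names are assumed
# for demonstration, not drawn from the statute).
ADULT_INTERVAL = timedelta(hours=3)
MINOR_INTERVAL = timedelta(hours=1)

def disclosure_due(session_start: bool, time_since_last_notice: timedelta,
                   minor_or_minor_directed: bool) -> bool:
    """Return True when the 'you are not talking to a human' notice is due."""
    if session_start:
        return True  # required at the start of every interaction
    interval = MINOR_INTERVAL if minor_or_minor_directed else ADULT_INTERVAL
    return time_since_last_notice >= interval
```

Oregon's "reasonable person" standard, by contrast, is fact-dependent and does not reduce to a fixed timer.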
For minors using AI companions, operators must implement measures to prevent the companions from making false claims of being human or sentient, simulating emotional dependence, or engaging in romantic or sexual innuendo with minors. While Washington’s law regarding minors is only triggered if an operator “knows” the user is a minor, Oregon’s law is broader, covering operators who know or have “reasons to believe” a user is a minor.
In Oregon, additional requirements for minors include periodic reminders to take breaks and prohibitions on generating certain types of statements or visual content. For example, if a minor in Oregon indicates a desire to end the conversation, an AI chatbot cannot generate a message that “simulates emotional distress.” Similarly, Washington’s statute prohibits manipulative engagement techniques including encouraging minors to withhold information from trusted adults.
Operators in both states must establish, implement, and publicly disclose protocols for detecting and responding to user expressions of suicidal ideation, suicidal intent, or self-harm before making AI companions available.
These protocols must use evidence-based or reasonable methods to identify relevant inputs and provide referrals to crisis resources, including the national 9-8-8 suicide and crisis lifeline or, for minors, youth peer support lines.
Operators are required to prevent the generation of content that encourages or describes self-harm and publish annual reports detailing their crisis intervention protocol and the number of referrals made, excluding any personal user information.
Oregon further mandates that operators employ clinical best practices for additional interventions if users continue to express suicidal ideation after receiving a referral.
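The crisis-response obligations described above can be summarized as a short decision flow. The sketch below is an assumption-laden illustration (function names and resource labels are ours): detection itself must use evidence-based or reasonable methods, which a real implementation cannot reduce to a boolean flag.

```python
# Illustrative crisis-referral flow (names and structure are assumptions for
# demonstration only; not a compliant implementation).
CRISIS_LINE = "988 Suicide & Crisis Lifeline"
YOUTH_LINE = "youth peer support line"  # example minor-specific resource

def crisis_response(expresses_self_harm: bool, is_minor: bool,
                    continued_after_referral: bool) -> list[str]:
    """Return the response steps the statutes contemplate for a given input."""
    steps: list[str] = []
    if not expresses_self_harm:
        return steps
    # Both states: refer the user to crisis resources.
    steps.append(f"refer to {YOUTH_LINE if is_minor else CRISIS_LINE}")
    # Oregon: apply clinical best practices if ideation persists post-referral.
    if continued_after_referral:
        steps.append("escalate per clinical best practices (Oregon)")
    return steps
```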
Both statutes are enforceable through private rights of action, which increases litigation risk for operators.
Oregon’s law allows individuals who suffer ascertainable loss or “other injury in fact” due to a violation to recover the greater of actual damages or $1,000 per violation, in addition to injunctive relief and attorney fees. The law does not define what constitutes a “violation,” creating potential for cumulative claims arising from each instance of noncompliance.  
Washington’s law does not provide for statutory damages. Instead, it treats violations as unfair or deceptive acts under the Washington Consumer Protection Act, with the potential for actual damages, trebling of damages, injunctive relief, and fee-shifting.
Businesses already complying with California’s AI companion law should not assume they will be in compliance with the Washington and Oregon laws; the statutes differ in their definitions and exclusions, disclosure cadences, knowledge standards for identifying minors, and available remedies.
The Washington and Oregon AI companion statutes mark a significant new chapter in the regulation of emotionally adaptive AI technologies offered to consumers. With expansive requirements, private enforcement mechanisms, and the potential for substantial statutory damages, these laws will have far-reaching implications for businesses deploying interactive AI.
Businesses should take steps now to prepare in advance of the January 1, 2027 effective date, including assessing whether their AI chatbots or platforms fall within the statutes’ scope and reviewing their disclosure, minor-protection, and crisis-response protocols for compliance readiness.
If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:
