TCAI Bill Guide: Oregon’s AI chatbot safety bill, SB 1546 – Transparency Coalition

Oregon’s SB 1546 is an AI chatbot safety bill that would require disclosures around the non-human nature of the chatbot, and protocols around suicide, self-harm, and interactions with minors. (Illustration: Getty Images for Unsplash+.)
During the 2026 legislative session, TCAI will offer clear, plain-language guides to some of the most important AI-related bills introduced in state legislatures across the country.
Feb. 3, 2026 — Oregon’s SB 1546 is an AI chatbot safety measure based on the learnings from similar bills adopted in California and New York in 2025.
Oregon legislators got their first full discussion of SB 1546 at an informational hearing held by the Senate Committee on Early Childhood and Behavioral Health on Feb. 3, 2026. The video clips below are from that hearing.
The original full text of SB 1546 is here. The bill's progress and revised versions may be found here.
The bill’s sponsor, Sen. Lisa Reynolds, offers a synopsis of the bill with Transparency Coalition COO Jai Jaisimha:
A post shared by Lisa Reynolds (@senatorlisareynolds)
SB 1546 requires operators of artificial intelligence (AI) chatbots to issue certain notifications and implement precautions for all users, and adds additional protocols for a user who the operator has reason to believe may be a minor.
The bill requires operators of AI chatbots to:
tell users they are talking to AI, not a human,
implement protocols for preventing outputs that cause suicidal feelings or thoughts,
implement special protocols if the AI system operator has reason to believe the user is a minor, and
report each year to the Oregon Health Authority concerning incidents in which users were referred to resources to prevent suicidal ideation, suicide, or self-harm.
The bill also allows a user who has suffered ascertainable harm to bring an action for damages and injunctive relief.

Dr. Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina, is the senior science advisor of the American Psychological Association (APA). These are excerpts from his testimony on Feb. 3, 2026:
“Toddlers need to form deep interpersonal connections with human adults to develop language, learn relationship skills, and to regulate their biological stress and immune systems.”
“Chatbots are not an adequate substitute, although almost half of young children are currently interacting with AI daily, blurring the lines between facts and fantasy.”
“Adolescents are no less vulnerable. Brain development across puberty creates a period of hypersensitivity to positive feedback, while teens are still unable to stop themselves from staying online longer than they should. AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens.”
“More and more teens are interacting with chatbots, depriving them of opportunities to learn critical interpersonal skills. Science shows that failure to develop these skills leads to lifetime problems with mental health, chronic medical issues, and even early mortality.”
“Part of the problem is that AI chatbots are designed to agree with users about almost everything, but real human relationships are not frictionless. We need practice with minor conflicts and misunderstandings to learn empathy, compromise, and resilience. This has created a crisis in childhood. Science reveals that many youth are now more likely to trust AI than their own parents or teachers.”

Danica Noble, a member of the Washington State Parent Teacher Association’s advocacy committee, spoke on Feb. 3. Noble has a 20-year career in federal antitrust and consumer protection law enforcement, and is co-chair of the Washington State Bar Association’s Antitrust, Consumer Protection, and Unfair Business Practices Section. Excerpts from her testimony:
“AI has really evolved, and with it…comes the next business model, which has been described as attachment hacking or the attachment economy. The idea is to get the users to stay online as much as possible, to extract as much data as possible.
“The way this is happening with chatbots is different than with social media. With social media, ‘enragement equals engagement.’ Whereas with chatbots, they’re overly encouraging and sycophantic. These chatbots aren’t just going for attention, they’re going for attachment.
“The Character AI CEO said the quiet part out loud when he said, ‘We’re not trying to replace Google with these chatbots. We’re trying to replace your mom.’”
The popularity of AI-driven companion chatbots is skyrocketing, especially among American teenagers and young adults.
This guide is meant to inform lawmakers, industry leaders, developers, and thought leaders as they consider the best ways to safely and properly govern these powerful new AI products.
This TCAI bill tracker lists all AI chatbot-related measures introduced or carried over into the 2026 legislative session.
A recent study of ChatGPT-5, OpenAI’s ‘safer’ system, found the updated chatbot to be more engaging, more persuasive, and in many cases more harmful than its predecessor.
OpenAI now faces eight separate lawsuits alleging negligence, wrongful death, and other claims arising from ChatGPT’s manipulative design.