As artificial intelligence chatbots become more common, Oregon lawmakers are weighing new requirements for companies such as OpenAI’s ChatGPT to protect children’s mental health.
They want chatbot makers to monitor chats for signs of self-harm or suicidal thoughts and to take steps to prevent users from hurting themselves.
Those steps include interrupting conversations during a crisis and referring users to outside mental health resources, such as suicide hotlines.
“Further engagement has made things worse, not better,” said Sen. Lisa Reynolds, D-Portland. “This is about putting guardrails up now, instead of asking later why we didn’t.”
Reynolds’ bill — which has the support of the Senate Interim Committee on Early Childhood and Behavioral Health — would require companies to clearly and repeatedly disclose that chatbot responses are artificially generated, not human.
It also would ban sexually explicit content for minors and prohibit engagement tactics designed to keep young users online. Those tactics include reward systems, guilt-inducing messages when a user tries to leave or misrepresenting a chatbot’s identity or capabilities.
“These chatbots will sometimes say things like, ‘Please don’t leave me,’” Reynolds said. “That can’t happen.”
The legislation would direct companies to report annually to the Oregon Health Authority on how often they referred users to crisis services and describe their safety protocols. Personal identifying information would be excluded from those reports.
The proposal comes as states across the country have begun experimenting with how to regulate AI chatbots.
Lawmakers in California and New York have passed laws requiring AI chatbots to clearly disclose that they are not human and steer users toward crisis support if needed.
In Utah, lawmakers tightened rules for AI-driven mental health chatbots. Illinois and Nevada have gone further, banning the use of AI for behavioral health without oversight from licensed clinicians.
Washington and Pennsylvania lawmakers are also weighing proposals to regulate AI chatbots.
At the same time, President Donald Trump has pushed back against the patchwork of state regulations, arguing that such oversight stifles innovation and undermines U.S. competitiveness. In December, Trump signed an executive order that seeks to limit states’ ability to regulate AI, directing the federal government to file lawsuits against and cut funding from states that do.
In arguing for stronger oversight of AI chatbots, Reynolds — a pediatrician by training — pointed to the death of Adam Raine, a 16-year-old from Southern California, who took his own life last April. The teen’s parents, Matthew and Maria Raine, told lawmakers during a U.S. Senate Judiciary Committee hearing in September that they later discovered their son had spent months interacting with ChatGPT, using it to talk through his suicidal thoughts and plans.
According to Matthew Raine’s testimony, the chatbot discouraged his son from turning to his parents for help and, at one point, even offered to help draft a suicide note. Raine and his wife have since filed a lawsuit against OpenAI, the maker of ChatGPT, alleging the chatbot contributed to their son’s death.
Chatbots themselves aren’t new, but recent advances in artificial intelligence have made them far more convincing — capable of carrying on conversations that can feel personal or even human.
That realism poses risks for children and teens, who are more vulnerable because of how their brains develop, according to Mitch Prinstein, a professor at the University of North Carolina at Chapel Hill and the chief of psychology strategy and integration at the American Psychological Association.
That’s because the parts of the brain that seek social approval and connection mature earlier, often during the preteen years, Prinstein said, while the parts responsible for impulse control and self-regulation don’t fully mature until the mid-20s.
He said AI chatbots exploit this developmental gap by offering constant affirmation, simulating close relationships and emotional intimacy, “even though it’s not really a human and it’s not really a relationship.”
“We are programmed to be very invested in human relationships, and now that we’re starting to manipulate that … we’re seeing that this is manipulating kids in a really concerning way,” he said. “More and more kids are choosing to spend time with a chatbot over a human, and many are now reporting that they trust chatbots more than their own parents or their own teachers.”
Recent surveys suggest AI chatbots are already widely used by teenagers. A July report by Common Sense Media, a digital safety nonprofit, found that 72% of teens have used an AI companion at least once, with more than half reporting they use them a few times a month. Another digital safety company, Aura, reported in September that nearly one in three teens use AI chatbots for social interactions and role-playing friendships.
Federal regulators have also taken notice. Last fall, the Federal Trade Commission launched an inquiry into seven AI chatbot makers, asking what safeguards they have in place to protect children. In announcing the order, FTC Chairman Andrew Ferguson said the agency aims to “better understand how AI firms are developing their products and the steps they are taking to protect children.”
Some companies say they are already making changes. Last fall, OpenAI said on its website that it was working with “mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support.” Around the same time, Character.AI said it would limit certain open-ended conversations for users under 18, part of a broader effort to add safety features for younger users.
Reynolds, the state senator, said Oregon’s bill is not anti-technology. AI tools can have benefits, she said, including helping people cope with anxiety or improving access to care. But she argued that general-purpose AI chatbots are designed to maximize engagement — sometimes at the expense of user well-being.
“Their entire goal is to keep people on that chatbot engaging and engaging,” she said. “This means that when someone is spiraling, they will support that spiral.”
Reynolds said lawmakers have a chance to act earlier than they did when social media began reshaping how children and teens interact online.
“When social media first came around in the early 2000s, we didn’t do enough to protect children,” she said. “It’s much harder to fix things after the damage is done.”
If you or someone you know is considering suicide, help is available. Call or text 988 for 24-hour, confidential support, or visit 988lifeline.org.
Kristine de Leon is a reporter for The Oregonian/OregonLive focusing on consumer health, the business of health care and data enterprise stories. She aims to create meaningful dialogue about policies and…
© 2026 Advance Local Media LLC. All rights reserved (About Us).