California governor vetoes bill to restrict kids' access to AI chatbots

California Gov. Gavin Newsom speaks during a press conference in Los Angeles, Wednesday, Sept. 25, 2024. Photo by: Eric Thayer / AP, file
Associated Press
Monday, Oct. 13, 2025 | 8:02 p.m.
SACRAMENTO, Calif. — California Gov. Gavin Newsom on Monday vetoed landmark legislation that would have restricted children’s access to AI chatbots.
The bill would have banned companies from making AI chatbots available to anyone under 18 years old unless the businesses could ensure the technology couldn’t engage in sexual conversations or encourage self-harm.
“While I strongly support the author’s goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” Newsom said.
The veto came hours after he signed a law requiring platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol for preventing self-harm content and for referring users to crisis service providers if they express suicidal ideation.
Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from help with homework to emotional support and personal advice.
California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.
The two measures were among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.
The youth AI chatbot ban would have applied to generative AI systems that simulate a “humanlike relationship” with users by retaining their personal information and asking unprompted emotional questions. It would have allowed the state attorney general to seek a civil penalty of $25,000 per violation.
James Steyer, founder and CEO of Common Sense Media, said Newsom’s veto of the bill was “deeply disappointing.”
“This legislation is desperately needed to protect children and teens from dangerous — and even deadly — AI companion chatbots,” he said.
But the tech industry argued that the bill was so broad that it would stifle innovation and take away useful tools for children, such as AI tutoring systems and programs that could detect early signs of dyslexia.
Steyer also said the notification law didn’t go far enough, saying it “provides minimal protections for children and families.”
“This legislation was heavily watered down after major Big Tech industry pressure,” he said, calling it “basically a Nothing Burger.”
But OpenAI praised Newsom’s signing of the law.
“By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” spokesperson Jamie Radice said.
California Attorney General Rob Bonta in September told OpenAI he has “serious concerns” about the safety of its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions.
Research by a watchdog group has found that chatbots can give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress.
Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic topics, and is instead directing them to expert resources. Meta already offers parental controls on teen accounts.
OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.
