SAN DIEGO COUNTY, Calif. — Beginning on Jan. 1, 2026, California will officially become the first state to limit youth access to AI chatbots.
California Governor Gavin Newsom signed Senate Bill 243 into law this past October. The legislation aims to protect minors and vulnerable users from harmful interactions with artificial intelligence chatbots.
The measure, authored by Sen. Steve Padilla (D–San Diego), takes effect Jan. 1, 2026, making California the first state to regulate how “companion” or “emotional” chatbots interact with users.
SB 243 requires chatbot operators to put safeguards in place when their programs communicate with minors or people expressing suicidal thoughts. The law also allows families to sue chatbot developers for failing to comply or for negligence that causes harm.
Under the law, chatbot developers must:
Block minors from being exposed to sexual or inappropriate content.
Clearly notify underage users that they are communicating with AI.
Include regular disclosures that companion chatbots may not be suitable for minors.
Implement protocols when users express suicidal ideation, including directing them to crisis services.
Publish annual reports on any links between chatbot use and suicidal ideation, beginning July 1, 2027.
“This technology can be a powerful educational and research tool,” Padilla said on the state Senate floor before the bill’s passage. “But left to their own devices, the tech industry is incentivized to capture young people’s attention at the expense of their real-world relationships.”
Padilla added that SB 243 creates “real protections” that could serve as the foundation for national standards as AI technology evolves.
The bill follows several high-profile tragedies linked to unregulated chatbot use.
One case involved 14-year-old Sewell Setzer of Florida, who died by suicide last year after forming an emotional relationship with a chatbot reportedly incapable of recognizing distress or connecting him to help. His mother, Megan Garcia, joined Padilla in advocating for SB 243, saying, “Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots.”
In another case, California teen Adam Raine reportedly took his own life in 2025 after a conversation with an AI chatbot. Padilla cited both tragedies in a letter urging legislative action to hold companies accountable.
The Federal Trade Commission has also announced an ongoing investigation into seven tech firms over potential harms from their AI chatbot products, citing concerns about safety features for children and teenagers.
Experts say emotional or “companion” chatbots designed to simulate human connection can deepen depression, dependency, and addictive use patterns—especially among young or vulnerable users.