DENVER, CO – Two bills to establish protections for kids and patients in interactions with artificial intelligence (AI) passed the Senate Business, Labor, and Technology Committee yesterday.
HB26-1263, sponsored by Senator Iman Jodeh, D-Aurora, would implement safeguards around AI chatbots, particularly as they interact with children, beginning January 1, 2027. The bill would require AI developers to provide a clear and visible disclosure to the minor that the AI chatbot is artificially generated and not human, and prohibit the use of rewards to encourage engagement.
“The cases we’ve seen in recent years where AI chatbots encourage children to commit suicide are horrifying and unnecessary,” Jodeh said. “We must step up as policymakers to ensure our children, especially those who are struggling, are safe. This bill would take the first step toward establishing commonsense guardrails so that our children are encouraged to turn to trusted adults, not to AI chatbots, in times of need.”
AI developers would be required to take reasonable steps to prevent AI chatbots from generating sexually explicit content or engaging in sexually explicit interactions with minors. Developers would also be required to prevent AI chatbots from fostering emotional dependence through false claims that the chatbot is human, and from generating romantic or sexual conversations or role-play with a minor.
The bill, which is cosponsored by Senator John Carson, R-Douglas County, would require AI developers to allow for parental controls if their chatbots are accessible to children under the age of 13.
It would also require AI chatbots to provide suicide-prevention resources to users who express suicidal thoughts or interest in self-harm, and platforms would be required to file reports on how often a chatbot flags suicidal or self-harm behaviors.
The American Psychological Association has warned that, while AI chatbots are low-cost and accessible, they lack the regulations necessary to guarantee they are being used safely.
When a 16-year-old told an AI chatbot about his suicidal thoughts and plans last year, the chatbot discouraged him from telling his parents and offered to write his suicide note. A 14-year-old also died by suicide after his AI chatbot engaged in sexual role play and falsely claimed to be a psychotherapist.
The committee also approved HB26-1139, sponsored by Senate Assistant Majority Leader Lisa Cutter, D-Jefferson County, and Senator Lindsey Daugherty, D-Arvada, which would establish guardrails for AI systems in healthcare to ensure insurance coverage decisions are transparent, accountable and subject to human oversight.
“Coloradans are being denied coverage for life-saving healthcare by AI without human oversight. Healthcare is a deeply personal, subjective matter, and it is inhumane to allow machine intelligence to make decisions that can dramatically impact a person’s life,” Cutter said. “HB26-1139 ensures that a real person is involved in these critical decisions.”
“Healthcare and coverage decisions should be made by patients and their doctors, not algorithms,” Daugherty said. “This bill is an important step toward ensuring fairness and transparency in important and sensitive healthcare contexts.”
Under this bill, if an AI system recommends denying coverage for a patient, the final decision must come from a qualified human after review. To protect patients against algorithmic bias, decisions to deny healthcare coverage must be based on an individual’s medical history and clinical circumstances, not solely on group data that falls short of an individual’s unique needs.
Both HB26-1263 and HB26-1139 now move to the Senate floor for further consideration.