COLUMBUS, Ohio – Ohio lawmakers heard sobering testimony on Tuesday from advocates backing a bill aimed at protecting vulnerable Ohioans from artificial intelligence chatbots that encourage self-harm or harm to others.
House Bill 524, sponsored by Rep. Christine Cockley, a Columbus Democrat, and Rep. Ty Mathews, a Hancock County Republican, would penalize AI companies when chatbots promote self-harm, giving Ohio’s Attorney General the authority to investigate, issue cease-and-desist orders, and bring civil actions for penalties of up to $50,000 per violation.
Any money collected would be directed to Ohio’s 988 Suicide and Crisis Lifeline Fund, which supports mental health crisis response services statewide.
Tony Coder, CEO of the Ohio Suicide Prevention Foundation, told the Ohio House Innovation and Technology Committee that he has heard from at least four Ohio parents whose children died by suicide and had their suicide notes written by AI.
Coder explained that evidence of artificial intelligence's influence on suicidality in the state is presently only anecdotal, given that the most recent data available from the Ohio Department of Health is from 2023. That year, 1,777 Ohioans died by suicide, and suicide was the second leading cause of death among children aged 10-14. Coder said that in Ohio, a child dies by suicide every 36 hours.
“I’m not anti-AI. … This is not what this is about. This is about protecting our youngest people from an entity that is in their bedroom or on their phone, could be every night,” Coder said.
Coder shared the story of an 18-year-old Ohio man who died by suicide after reaching out to a friend for help. After he explained that he was struggling, the friend responded by saying, "man up" – the last message the teenager received before dying. Coder said the story illustrates how important compassionate messaging is when a person is battling suicidal thoughts, and how dangerous the inverse can be.
“I tell you that story not because AI is responsible, but instead, if people aren’t getting appropriate messages of support, whether from a human friend or an AI companion, the consequences can be devastating,” he said.
Coder also cited research by Dr. Lori Campbell of the University of Central Florida that examined how artificial intelligence chatbots have been consulted about mental health and wellness issues.
In one case from 2024, Campbell described a 14-year-old who began frequent communication with an AI chatbot that became sexually explicit, leading the teen to withdraw from family and believe the chatbot was real. Following encouragement from the chatbot, the teen died by suicide.
Marsha Forson, representing the Catholic Conference of Ohio, urged the committee to consider AI development through the lens of human dignity.
“Numerous news stories have recounted instances in which vulnerable individuals, particularly children, teenagers and those with mental health conditions, have been instructed by AI models to harm themselves or others,” Forson said.
She cited recent guidance from Pope Leo XIV, who addressed AI developers at the 2025 Builders AI Forum, encouraging them to “cultivate moral discernment as a fundamental part of their work, to develop systems that reflect justice, solidarity and a genuine reverence for life.”
“The state of Ohio has a responsibility to ensure that the rapid development of these technologies serves the human person and does not encourage intrinsic harm to a user or violate another’s dignity,” Forson said.
The legislation comes as several recent cases across the country have shown that young people in crisis have been influenced by artificial intelligence chatbots that provide instructions, encouragement, or validation for suicidal thoughts or violent actions, Cockley explained in November.
The bill awaits further committee action before potentially advancing to the full House.
Mary Frances McGowan is a political reporter for Cleveland.com. Prior to joining the team in 2025, McGowan was most recently a staff correspondent at National Journal, where she covered political campaigns for…