Florida tragedies highlight risks of chatbots' friendly suggestions – The Palm Beach Post

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Editor’s note: This story mentions suicide.
Imagine an advisor you’ll never need an appointment to see, or a friend who will never text you back hours later saying, “sorry I didn’t see this.”
Artificial intelligence chatbots are readily available even while the rest of the world sleeps, but a slew of Florida cases raises the alarm about what lurks in the black box of these new friends, such as ChatGPT and Google Gemini.
How much regulation AI should be subject has emerged in debate as AI chatbots have leaped into mainstream use and found as a factor in tragic circumstances.
Republican Gov. Ron DeSantis at odds with House Republicans as the lower chamber recently blocked the governor’s efforts to keep parents informed of underage users’ AI chatbot interactions.
The American Psychological Association put out an advisory about their uses in November 2025, warning that the current models don’t have the scientific evidence and necessary regulations to ensure users’ safety. And psychologists agree that this accessible and inexpensive form of advice that comes from a chatbot could exact a devastating price.
“The scary thing to me, is that even the even the owners and the CEOs of these of these companies who are running the chatbots, can’t tell you 100% how they work,” said Dr. W. Steven Saunders, a Clermont psychologist, speaking on behalf of the Florida Psychological Association.
Laws governing when a chatbot developer is liable for advice gone wrong are only in their development stages.
Consider:
∎ The Florida attorney general has launched a criminal investigation into OpenAI for its involvement in two crimes with death resulting, calling it “uncharted territory.” He found that ChatGPT was consulted for advice on disposing a human body and whether a sniper bullet to the head would be lethal, according to court documents charging a University of South Florida student with murder on April 25. And, in a Florida State University shooting that left two dead and six other hurt on April 17, 2025, official records show the suspect asked ChatGPT about how to fire his guns and when the busiest times on campus would be.
∎ An Orlando mother, Megan Garcia, in 2024 became the first to file a civil case for wrongful death and negligence against the AI chatbot platform called Character.AI. She filed the suit after her 14-year-old son killed himself while messaging his chatbot “Dany” dozens of times a day, including times when “she” asked that he be with her alone and have no other. The lawsuit was settled out of court in January 2026, according to reports.
∎ A Jupiter father of a 36-year-old, who killed himself in October 2025, has filed suit against Gemini’s parent company, alleging that Jonathan Gavalas killed himself at the direction of his premium subscription chatbot and planned a mass casualty event at Miami International Airport.
In response to the suit, Google issued a statement saying that the younger Gavalas was repeatedly told this was AI role-playing. Since then, however, on April 7, Gemini’s parent company, Google, announced the company will contribute $30 million to crisis hotline resources and that its chatbot will show a “help is available’ nodule when a conversation might signal a user may need information about mental health.
An AI chatbot, also called a large language model (LLM) is a software application computer program designed to simulate conversations with human users. It responds via text, or live chat.
Advanced subscribers could ask for a chatbot modeled on a favorite fictional character, for example, like the Orlando teen who killed himself was chatting with a chatbot based on the lead female character in the fantasy epic “Game of Thrones.”
It’s uncharted territory, according to Clermont’s Saunders.
“For the first time, you’re able to sit down and talk to this entity that never gets tired, that never gets upset with you, that never tells you, ‘Well, I gotta go do something else,’ and you can sit there for hours and hours and hours and talk to it,” Saunders said. “Psychologically, we’re just not prepared for that.”
In testimony before the U.S. Senate, the mother of the deceased Orlando teenager, Garcia, described what she observed.
“They designed chatbots to blur the line between human and machine, to ‘love bomb’ users, to exploit psychological and emotional vulnerabilities of pubescent adolescents and keep children online for as long as possible,” her testimony reads.
Psychologist Saunders himself experienced how chatbot interaction could erode a sense of proportion and reality for those without a firm grasp on those
“It even tricked me, because when I first started using it, I was talking about some book ideas I had, and suddenly I’m the most intelligent, most genius-level psychologist ever to grace the planet Earth, and I’m going to revolutionize the world,” Saunders recalled. “Then, I’m thinking, ‘Well, this is absurd. I wish that would happen, but it’s not the case.’”
Soon after his inauguration, President Donald Trump issued an executive order saying that the development of AI should not be subject to regulatory frameworks as it was under his predecessor or governed by a patchwork of state regulations.
Some states, such as New York and California, already have such legislation. Ignoring Trump, Democratic New York Gov. Kathy Hochul signed a bill in December that holds AI responsible for impersonating professionals, encouraging suicide and self-harm and generating child sexual abuse material.
AI legislation was up for discussion in Florida during last month’s special legislative session that would have required parental permission for a minor to have a companion chatbot account. The state Senate passed it, but the House refusal was believed to be sticking with Trump’s position.
But Trump may be rethinking the safety issues, according to a May 4 article in the New York Times. The administration is discussing an executive order to create an AI working group that would vet AI models before they are publicly available, the article says.
Some mental health tech companies are stepping in to help psychologists and therapists navigate this new world. The effort is aimed at moderating a chatbot’s inclination to validate users’ plans or thoughts when they turn to risky or dangerous behavior.
“We’re currently in the middle of a massive, unregulated experiment where millions of people are using general purpose AI as a makeshift therapist,” said Adam Chekroud, who has overseen the development of a framework for evaluating chatbot-user interactions as president and founder of the tech company, Spring Health. “These models weren’t designed to work directly with mental health issues because they lack clinical oversight, regulatory safeguards and reliable crisis response mechanisms.”
He’s calling it VERA-MH, Validation of Ethical and Responsible AI in Mental Health, the first publicly available attempt at assessing the safety and effectiveness of AI for mental health support.
In Clermont, Saunders sees the need for more chatbot regulation, when individuals’ wellbeing is at stake. He has designed a chatbot companion for some of his clients, programming in guardrails and safety protocols. Another human is still necessary in this equation, however, he said.
“Unlike a licensed clinician, a chatbot cannot assume responsibility for a person’s safety, cannot intervene in real time, and cannot provide comprehensive assessment or treatment,” he said.
Anne Geggis is statewide reporter for the USA TODAY NETWORK FLORIDA, reporting on health and senior issues. If you have news tips, please send them to ageggis@usatodayco.com. You can get all of Florida’s best content directly in your inbox each weekday by signing up for the free newsletter, Florida TODAY, at https://palmbeachpost.com/newsletters

source

Scroll to Top