Is your next therapist an AI chatbot? For many it already is – Florida Today

It’s 3 a.m. and you’re huddled over your keyboard, eyes darting between tabs as sunrise looms. On a whim, you open ChatGPT and find yourself typing, “I’m feeling off. Any suggestions?” Within moments, the screen blooms with warmth. “I’m sorry you’re not feeling like yourself,” it begins, before quickly sharing ten mood-boosters and an offer to provide deeper guidance if you need it.
When therapy feels out of reach, more people are finding comfort from an unexpected source — AI-powered chatbots, which are rapidly emerging as one of the fastest-growing tools for mental health support in the U.S., offering round-the-clock conversation and coping skills.
A recent survey by the nonprofit Sentio Marriage and Family Therapy and Counseling Center found that 48.7% of respondents are using AI chatbots powered by large language models, such as ChatGPT, Claude and Gemini, for therapeutic support, and the majority (63%) said it improved their mental health and well-being.
There’s a catch, though: AI chatbots aren’t regulated the way licensed therapists are, so questions about true privacy remain unsettled.
As a licensed clinical psychologist, Katherine Schafer doesn’t just recommend ChatGPT to help manage the mental health concerns of her patients — she also uses it herself.
“I have used, and loved, ChatGPT during mental distress, and I will continue to do so,” she said.
And she’s not alone. When it comes to general-purpose AI chatbots, OpenAI’s ChatGPT is the dominant force. As of July 2025, ChatGPT accounted for approximately 80% of all AI chatbot sessions in the U.S., and in a post to X this month, OpenAI Vice President Nick Turley revealed that ChatGPT was on track to reach 700 million weekly active users.
An assistant professor of biomedical informatics at Vanderbilt University Medical Center in Nashville, Tennessee, Schafer studies how technology can improve healthcare, particularly how tools like ChatGPT can expand access to mental health support at scale.
Schafer understands the concerns, including reports of ChatGPT giving harmful advice to people in crisis, and she agrees that poorly trained models pose a real risk. But she’s equally concerned by calls from some mental health professionals to avoid using these tools for therapeutic purposes altogether.
“The reason that concerns me so much is that ChatGPT has the potential — with pros and cons of course — to have widespread, large-scale benefits to would-be therapy patients who need help,” she said.
Schafer added that obstacles like the stigma of seeking mental health treatment, the exorbitant cost of therapy, long wait times to get a therapist and the inability to find providers “are real barriers that stop people from getting care.”
“ChatGPT can be a solution to all these barriers,” Schafer said. “No one has to know that you are getting care from ChatGPT, you don’t have to pay for ChatGPT’s help, there is no wait time, and help is right at your fingertips whenever you need it.”
According to OpenAI, the company does not sell your data or use your chats for advertising, and ChatGPT conversations are private by default, meaning no one else can see them. However, if you opt in to share data for model improvement, your chats may be used to train future versions, though your identity is never attached. You can control this in Settings > Data Controls by turning off chat history and training, which also prevents human review.
During a recent appearance on “This Past Weekend w/ Theo Von,” Sam Altman, CEO of OpenAI, the company that created ChatGPT, admitted he has concerns about people sharing their most personal thoughts with the AI chatbot. He told Von that while doctor-patient confidentiality protects what you share with a human doctor or therapist, that’s not the case with ChatGPT.
“We haven’t figured that out yet for when you talk to ChatGPT,” Altman said. “So, if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist.”
Altman made his stance on privacy clear.
“I think we really have to defend rights to privacy,” Altman said. “I don’t think those are absolute. I’m like totally willing to compromise some privacy for collective safety, but history is that the government takes that way too far and I’m really nervous about that.”
And as for safety, Schafer acknowledges that there is currently no governing or licensing body to hold AI chatbots accountable if they provide harmful or misleading advice, so the technology must be continually monitored and updated, or its performance may not stay consistent.
“We need to have a more nuanced discussion than just — is ChatGPT good or bad as a therapist,” Schafer said. “The question needs to be ‘Is ChatGPT better than nothing?’ Because without ChatGPT that is what a lot of people are getting.”
Some AI chatbots—like Wysa and Woebot—were created specifically to support mental health. Both use the principles of cognitive behavioral therapy (CBT) and offer features like mood tracking, daily check-ins, and guided conversations.
Woebot was developed in 2017 by clinical research psychologist Alison Darcy, who wanted to fill the gap when therapists aren’t available.
“It doesn’t matter how good a therapist is—you could be the best in the world—but unless you’re with your patient at 2 a.m. during a panic attack, you can’t help them in that moment,” Darcy said during a TED Talk in April 2025.
Initially launched on Facebook, Woebot later became a standalone app, reaching about 1.5 million users—most active between 2 a.m. and 5 a.m. As of June 2025, the direct-to-consumer version has been retired and access is now limited to those whose healthcare provider or employer partners with Woebot Health.
With around 4.5 million users, Wysa follows a similar model, partnering with health systems and employers. But unlike Woebot, Wysa still offers a free public version, with optional upgrades for expanded tools and human coaching.
“We have a free version of Wysa on the app store,” said Wysa spokesperson Sara Baldry. “The public at large can access this and if they like, they can upgrade to unlock all the self-help tools or to access human coaching.”
Neeley R. Hughey, a licensed mental health counselor, certified life coach, and founder of Coastal Wellness in Melbourne, Florida, has put several popular chatbots to the test.
“I think they can be quite helpful for immediate relief,” Hughey said. “They often teach useful CBT techniques like thought stopping, mindfulness, relaxation, and cognitive restructuring, which can be beneficial in moments of anxiety or depression.”
However, there is a downside. She believes they fall short in providing essential elements such as empathy, validation, genuine human connection and non-judgmental support.
“It’s also interesting to note that I’ve observed a certain bias when AI deals with alcohol or drug-related issues,” Hughey said. “Ultimately, for complex trauma, addressing the root causes of mental health issues, and challenging a client’s thought patterns, a human therapist is truly needed.”
In a post to X on Sunday, Altman acknowledged that many people are forming unusually strong attachments to AI models.
“A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way,” Altman wrote. “This can be really good! A lot of people are getting value from it already today. If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful.”
However, he stressed the need to avoid harm, especially for those in a “mentally fragile state and prone to delusion.” He added that while most users can separate reality from role-play, “a small percentage cannot.” And he noted the dangers aren’t limited to extreme cases: some users may feel better after interacting with ChatGPT, but could unknowingly be nudged away from their long-term well-being, or find themselves unable to cut back on its use.
“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” Altman wrote. “Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way.”
To ensure AI becomes “a big net positive,” Altman said OpenAI aims to measure its impact more effectively than was done with past technologies. That includes engaging with users about their short- and long-term goals and ensuring the AI can handle “sophisticated and nuanced issues” responsibly.
Have you used an AI chatbot for therapy, advice or coping skills? We’d love to hear your stories about how it’s worked — or hasn’t. Please email Jennifer Torres at JMTorres@Gannett.com.
This reporting is supported by a Journalism Funding Partners grant. Mental Health Reporter Jennifer Torres can be reached at JMTorres@gannett.com.