Using an AI chatbot for therapy or health advice? Experts want you to know these 4 things

By Laura Santhanam
As chatbots powered by artificial intelligence explode in popularity, experts are warning people against turning to the technology for medical or mental health advice instead of relying upon human health care providers.
There have been a number of examples in recent weeks of chatbot advice misleading people in harmful ways. A 60-year-old man accidentally poisoned himself and entered a psychotic state after ChatGPT suggested he eliminate salt, or sodium chloride, from his diet and replace it with sodium bromide, a toxin used for, among other things, treating wastewater. Earlier this month, a study from the Center for Countering Digital Hate revealed ChatGPT gave teens dangerous advice about drugs, alcohol and suicide.
The technology can be tempting, especially with barriers to accessing health care, including cost, wait times to talk to a provider and lack of insurance coverage. But experts told PBS News that chatbots are unable to offer advice tailored to a patient’s specific needs and medical history and are prone to “hallucinations,” or giving outright incorrect information.
Here is what to know about the use of AI chatbots for health advice, according to mental health and medical professionals who spoke to PBS News.
People are typically turning to commercially available chatbots, such as OpenAI’s ChatGPT or Luka’s Replika, said Vaile Wright, senior director of Health Care Innovation at the American Psychological Association.
People may ask questions as far-ranging as how to quit smoking, address interpersonal violence, confront suicidal ideation or treat a headache. More than half of teens said they used AI chatbot platforms multiple times each month, according to a survey produced by Common Sense Media. That report also mentioned that roughly a third of teens said they turned to AI companions for social interaction, including role-playing, romantic relationships, friendship and practicing conversation skills.
READ MORE: Analysis: AI in health care could save lives and money — but not yet
But the business model for these chatbots is to keep users engaged “as long as possible,” rather than give trustworthy advice in vulnerable moments, Wright said.
“Unfortunately, none of these products were built for that purpose,” she said. “The products that are on the market, in some ways, are really antithetical to therapy because they are built in a way that their coding is basically addictive.”
Often, a bot reflects the emotions of the human engaging with it in ways that are sycophantic and could “mishandle really critical moments,” said Dr. Tiffany Munzer, a developmental behavioral pediatrician at the University of Michigan Medical School.
“If you’re in a sadder state, the chatbot might reflect more of that emotion,” Munzer said. “The emotional tone tends to match and it agrees with the user. It can make it harder to provide advice that is contrary to what the user wants to hear.”
Asking AI chatbots health questions instead of asking a health care provider comes with several risks, said Dr. Margaret Lozovatsky, chief medical information officer for the American Medical Association.
Chatbots can sometimes give “a quick answer to a question that somebody has and they may not have the ability to contact their physicians,” Lozovatsky said. “That being said, the quick answer may not be accurate.”
It is not unusual for people to seek answers on their own when they have a persistent headache, sniffles or a weird, sudden pain, Lozovatsky said. Before chatbots, people relied on search engines (cue all the jokes about Dr. Google). Before that, the self-help book industry minted money on people’s low-humming anxiety about how they were feeling today and how they could feel better tomorrow.
Today, a search engine query may produce AI-generated results that show up first, followed by a string of websites that may or may not have information reflected accurately in those responses.
“It’s a natural place patients turn,” Lozovatsky said. “It’s an easy path.”
That ease of access stands in contrast to the barriers patients often encounter when trying to get advice from licensed medical professionals. Those obstacles may include whether they have insurance coverage, whether their provider is in-network, whether they can afford the visit, whether they can wait for an appointment, whether they worry about stigma attached to their question, and whether they have reliable transportation to their provider’s office or clinic when telehealth is not an option.
Any one of these hurdles may be enough to motivate a person to feel more comfortable asking a bot their sensitive question than a human, even if the answer they receive could potentially endanger them. At the same time, a well-documented loneliness epidemic nationwide is partially fueling a rise in the use of AI chatbots, Munzer said.
“Kids are growing up in a world where they just don’t have the social supports or social networks that they really deserve and need to thrive,” she said.
If people are concerned about whether their child, family member or friend is engaging with a chatbot for mental health or medical advice, it is important to reserve judgment when attempting to have a conversation about the topic, Munzer said.
“We want families and kids and teens to have as much info at their fingertips to make the best decisions possible,” she said. “A lot of it is about AI and literacy.”
Discussing the underlying technology driving chatbots and the motivations for using them can provide a critical point of understanding, Munzer said. That could include asking why chatbots are becoming such a large part of daily life, talking through the business model of AI companies, and exploring what else could support a child’s or adult’s mental wellbeing.
A helpful conversation prompt Munzer suggested for caregivers is to ask, “What would you do if a friend revealed they were using AI for mental health purposes?” That language “can remove judgment,” she said.
One activity Munzer recommended is for families to test out AI chatbots together, talk about what they find and encourage loved ones, especially children, to look for hallucinations and biases in the information.
WATCH: What to know about an AI transcription tool that ‘hallucinates’ medical interactions
But the responsibility to protect individuals from chatbot-generated harm is too great to place on families alone, Munzer said. Instead, it will require regulatory rigor from policymakers to avoid further risk.
On Monday, the Brookings Institution produced an analysis of 2025 state legislation and found “Health care was a major focus of legislation, with all bills focused on the potential issues arising from AI systems making treatment and coverage decisions.” A handful of states, including Illinois, have banned the use of AI chatbots to provide mental health therapy. A bill in Indiana would require medical professionals to tell patients when they are using AI to generate advice or inform a decision to provide health care.
One day, chatbots could fill gaps in services, Wright said — but not yet. Chatbots can tap into deep wells of information, she said, but that does not translate to knowledge and discernment.
“I do think you’ll see a future where you do have mental health chatbots that are rooted in the science, that are rigorously tested, they’re co-created for the purposes, so therefore, they’re regulated,” Wright said. “But that’s just not what we have currently.”
Turning to AI chatbots for advice can be tempting, especially with barriers to accessing health care, including cost, wait times to talk to a provider and lack of insurance coverage. But experts told PBS News that people need to be aware of the possible risks involved. Photo illustration by Getty Images
Laura Santhanam
Laura Santhanam is the Health Reporter and Coordinating Producer for Polling for the PBS NewsHour, where she has also worked as the Data Producer. Follow @LauraSanthanam