People Are Using AI Chatbots for Health Advice: Here's What to Know – MedPage Today

March 3, 2026 • 4 min read
With hundreds of millions of people turning to chatbots for advice, it was only a matter of time before tech companies began offering programs specifically designed to answer health questions.
In January, OpenAI introduced ChatGPT Health, a new version of its chatbot that the company says can analyze users’ medical records, wellness apps, and wearable device data to answer health and medical questions. Currently, there’s a waiting list for the program. Anthropic, a rival artificial intelligence (AI) company, offers similar features for some users of its Claude chatbot.
Both companies say their programs, known as large language models, aren’t a substitute for professional care and shouldn’t be used to diagnose medical conditions. Instead, they say the chatbots can summarize and explain complicated test results, help prepare for a doctor’s visit, or analyze important health trends buried in medical records and app metrics.
Chatbots Can Offer More Personalized Information Than a Google Search
Some doctors and researchers who have worked with ChatGPT Health and similar programs see them as an improvement over the status quo.
AI platforms are not perfect — they can sometimes hallucinate or provide bad advice — but the information they produce is more likely to be personalized and specific than what patients might find through a Google search.
“The alternative often is nothing, or the patient winging it,” said Robert Wachter, MD, a medical technology expert at University of California San Francisco. “And so I think that if you use these tools responsibly, I think you can get useful information.”
One advantage of the latest chatbots is that they answer users’ questions with context from their medical history, including prescriptions, age, and doctor’s notes.
Even if a person hasn’t given AI access to their medical information, Wachter and others say giving the chatbots as many details as possible improves responses.
AI Is Not for Medical Emergencies
Wachter and others stressed that there are situations when people should skip the chatbot and seek immediate medical attention. Symptoms such as shortness of breath, chest pain, or a severe headache could signal a medical emergency.
Even during less urgent situations, patients and doctors should approach AI programs with “a degree of healthy skepticism,” said Lloyd Minor, MD, of Stanford University.
“If you’re talking about a major medical decision, or even a smaller decision about your health, you should never be relying just on what you’re getting out of a large language model,” said Minor, who is the dean of Stanford’s medical school.
Medical Chatbots and Privacy Concerns
Many benefits offered by AI bots stem from users sharing personal medical information. But anything shared with an AI company isn’t protected by the federal privacy law that normally governs sensitive medical information — the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
HIPAA allows for fines and even prison time for doctors, hospitals, insurers, or other health services that disclose medical records. But the law doesn’t apply to companies that design chatbots.
“When someone is uploading their medical chart into a large language model, that is very different than handing it to a new doctor,” said Minor. “Consumers need to understand that there are completely different privacy standards.”
Both OpenAI and Anthropic say users’ health information is kept separate from other types of data and is subject to additional privacy protections. The companies do not use health data to train their models. Users must opt in to share their information and can disconnect at any time.
Testing Shows Chatbots Can Stumble
Despite excitement surrounding AI, independent testing of the technology is in its infancy. Early studies suggest programs like ChatGPT can ace high-level medical exams but often stumble when interacting with humans.
A 1,300-participant study by Oxford University recently found that people using AI chatbots to research hypothetical health conditions didn’t make better decisions than people using online searches or personal judgment.
AI chatbots presented with medical scenarios in a comprehensive, written form correctly identified the underlying condition 95% of the time.
“That was not the problem,” said lead author Adam Mahdi, PhD, of the Oxford Internet Institute. “The place where things fell apart was during the interaction with the real participants.”
Mahdi and his team found several communication problems. People often didn’t give the chatbots the necessary information to correctly identify the health issue. Conversely, the AI systems often responded with a combination of good and bad information, and users had trouble distinguishing between the two.
The study, conducted in 2024, did not use the latest chatbot versions, including new offerings like ChatGPT Health.
A Second AI Opinion Can Be Helpful
The ability for chatbots to ask follow-up questions and elicit key details from users is one area where Wachter sees room for improvement.
“I think that’s when this will get really good, when the tools become a little bit more doctor-ish in the way they go back and forth” with patients, Wachter said.
Consulting with multiple chatbots — similar to getting a second opinion from another doctor — may give users more confidence about the information provided.
“I will sometimes put information into ChatGPT and information into Gemini,” Wachter said, referencing Google’s AI tool. “And when they both agree, I feel a little bit more secure that that’s the right answer.”
The material on this site is for informational purposes only, and is not a substitute for medical advice, diagnosis or treatment provided by a qualified health care provider.
© 2005–2026 MedPage Today, LLC, a Ziff Davis company. All rights reserved.
MedPage Today is among the federally registered trademarks of MedPage Today, LLC and may not be used by third parties without explicit permission.
