# Chatbots

Should you trust ChatGPT for medical advice? What experts say about AI in healthcare – theweek.in

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
People are increasingly seeking health advice from AI chatbots such as ChatGPT and Gemini. However, critical risks remain around accuracy, misinformation, and patient safety.
Who would have thought that for every minor inconvenience or health concern – be it a skin rash, a persistent cough, or a random ache – we’d be turning to ChatGPT for answers? In an era where artificial intelligence is at our fingertips, AI chatbots are fast becoming the new front line for health queries.
Patients are no longer just relying on doctors or symptom-checker websites but consulting large language models (LLMs) such as ChatGPT and Gemini for advice on symptoms, diagnoses, and even treatment options. This shift has sparked global debate, raising questions about accuracy, access, ethics, and the evolving relationship between patients and medical professionals. 
A 2024 Australian study published in the Medical Journal of Australia found that 9.9% of adults had used ChatGPT to obtain health-related information within the first six months of its widespread availability. Usage was higher among individuals with low health literacy, those living in capital cities, and those for whom English was not their first language.
For some, AI offers reassurance, especially when used with caution. Simran Mudgal, a Delhi-based journalist, said she turned to ChatGPT out of curiosity after experiencing post-recovery pain from tuberculosis.  
“Doctors had already said the pain wasn’t serious, but I wanted to see what ChatGPT would say. It gave similar explanations, nothing alarming, so I felt reassured,” she said. “Had it suggested something drastic, I would’ve consulted a doctor immediately. But since the information felt harmless, I took it with a pinch of salt.” 
Studies, however, have shown more mixed results on whether these chatbots are effective diagnostic tools. There is no doubt that AI chatbots offer a number of potential benefits in healthcare settings. They can generate fast answers, reduce the burden on overworked medical staff, and serve as an educational resource for patients. A 2023 study in Scientific Reports showed that AI tools could simplify complex health literature into more readable formats, particularly for people with limited literacy or medical knowledge.
However, a separate study published in JAMA Network Open evaluated physicians, including those using GPT‑4 alongside traditional diagnostic tools, and found that “the use of an LLM did not significantly enhance diagnostic reasoning performance compared with the availability of only conventional resources.” 
Despite their promise, AI chatbots also come with considerable risks. According to a study published in PLOS ONE, ChatGPT is not accurate enough to be used as a standalone diagnostic tool. The researchers evaluated ChatGPT 3.5 using 150 real-world clinical case vignettes sourced from Medscape. While the model provided the correct diagnosis in its top answer for 49% of cases, its overall diagnostic accuracy across multiple-choice options was 74%. The study further cautions that ChatGPT often struggles with interpreting laboratory values and imaging results, and it may overlook key clinical details.  
“ChatGPT in its current form is not accurate as a diagnostic tool. ChatGPT does not necessarily give factual correctness, despite the vast amount of information it was trained on. Based on our qualitative analysis, ChatGPT struggles with the interpretation of laboratory values, imaging results, and may overlook key information relevant to the diagnosis. However, it still offers utility as an educational tool,” the authors warned in the July 2024 paper. 
Doctors like Dr Gita Prakash, an internal medicine specialist based in Delhi, acknowledge the growing presence of AI tools like ChatGPT in patient behaviour but stress the irreplaceable value of human interaction in medicine.
“Earlier, patients would often Google their symptoms before visiting. Now, they have options like ChatGPT or other chatbots,” she said. “There’s nothing inherently wrong with AI in healthcare, but it works with limited input. It doesn’t know your full medical history, family background, or pre-existing conditions like diabetes or hypertension.” 
Dr Prakash explains that medical diagnosis is often shaped by subtle cues, something AI can’t fully grasp.  
“When we examine a patient, we ask questions dynamically—based on what we see, what we sense, and even what we feel emotionally,” she said. “A chatbot can’t assess where the pain is, how severe it feels, or whether it’s linked to stress or anxiety. That kind of nuance comes from physical presence.”
Other concerns arise from how people interact with AI-generated health advice.  A recent paper in the Journal of Medical Artificial Intelligence revealed that while AI tools like ChatGPT may enhance efficiency and assist in clinical workflows, they also pose critical risks. These include overreliance on AI without adequate human oversight, the use of biased or incomplete datasets, and ethical uncertainties around data privacy, consent, and accountability. 
Dr Prakash also warned that incomplete information fed into AI systems could result in false or misleading conclusions. According to her, while such tools may not be inherently dangerous, relying on them without medical supervision can delay or distort diagnosis. In her view, in-person consultations provide an irreplaceable sense of confidence, empathy, and trust, elements that AI cannot replicate. 
Moreover, the use of AI in clinical settings is creating tension in doctor-patient relationships. A widely reported case from Australia involved a patient filing a formal complaint after their physician openly used ChatGPT during a consultation. The incident raised questions about professionalism and transparency in how AI tools are integrated into practice.  
Experts agree that while AI can support clinical decision-making, it must be used under human supervision. A 2023 report by the National Center for Biotechnology Information emphasises the need for clear regulatory frameworks, transparency, and clinician training when implementing AI in healthcare settings.  
Similarly, a 2025 study published in AI in Health and Medicine stressed that AI tools need rigorous evaluation for safety and efficacy before being widely deployed in patient-facing applications.
The emergence of AI chatbots in medicine presents a compelling mix of opportunity and caution. AI could help fill gaps in the healthcare system and improve outcomes for patients if used thoughtfully as a tool, not a replacement. But without guardrails, training, and oversight, it risks becoming just another source of misinformation in an already crowded digital landscape. 
This story was produced in collaboration with First Check, the health journalism vertical of DataLEADS.