What everyone should know before asking ChatGPT for medical advice – The Independent

More and more patients are turning to AI chatbots to answer their medical queries – but how safe is the health advice they’re receiving? And can the way you ask a question make all the difference? Katie Rosseinsky asks the experts
When Alexandra Watson has a question about her heart condition, her first port of call is Chad. That’s not the name of her cardiologist – rather, it’s her nickname for ChatGPT, which she has been using for the past couple of years to check her symptoms.
Her condition is a rare one, and she says that the LLM (large language model) “cuts through the noise” to provide readable and easily understandable information. “I couldn’t get my cardiologist to spend this time talking me through every question I have on the subject,” she says. Using AI “allows me to deep dive and talk hypothetically. Doctors are dismissive, Google just scares you, but Chad is helpful.”
In January, a report from OpenAI, the tech giant behind ChatGPT, claimed that more than 40 million people around the world use the bot for health advice every single day, accounting for more than 5 per cent of messages sent to it globally. And last year, research from healthcare champion Healthwatch found that 9 per cent of men and 7 per cent of women across England are using AI chatbots for medical queries.
For Watson, the fact that the chatbot can keep track of previous issues she has asked about, to give her a more comprehensive picture, is a bonus. It references her heart queries, for example, when she asks other health-related questions.
She’s aware, though, that “Chad” can have a propensity to flatter; it’s not necessarily one for tough love. “[It] wants to make me feel good about myself,” she says, noting that when she “asked about suitable diets the other day”, it mentioned that she “needed to take it easy” after an operation almost two years ago, and told her “to be kind to myself” during menopause.
Carole Railton is another convert. “I use ChatGPT most days with my work or for travel arrangements,” she says. “It seemed natural to use it for [other aspects] of my life, including medical information.” Like Watson, she has a heart condition. Her regular check-ups, she says, can sometimes feel like a tick-box exercise from the medical profession. So when she had some things going on with her body that she was not sure about, she turned to ChatGPT first.
The chatbot also proved useful when she was planning an international trip, directing her to get a “fit to fly” note in order to travel with her medication. Its cheerful tone makes all the difference, too. “If a human was as knowledgeable and as nice, I would make a beeline for them,” she says.
Given that they can be informative, convenient, and surprisingly personable, it is perhaps unsurprising that so many of us are asking AI bots for health guidance. They might seem friendlier and less alarmist than “Dr Google” – and can be easier to get hold of than your GP. But most of these programmes were not designed to dole out medical advice, and their small-print terms and conditions will tend to remind users of this. ChatGPT’s guidelines, for example, state that it is “not intended for use in the diagnosis or treatment of any health condition”.
But when we’re actually in the thick of a back-and-forth with a bot, it can be easy to forget this. A recent study by researchers at Stanford and Berkeley found that disclaimers and warnings in response to health questions notably decreased on LLMs between 2022 and 2025, dropping from 26.3 per cent to 0.97 per cent.
Like all LLMs, chatbots are notoriously prone to errors and hallucinations, when they generate factually incorrect or misleading information by predicting a pattern. Last year, for example, an American medical journal reported the case of a 60-year-old man who started replacing salt in his diet with sodium bromide after consulting ChatGPT. He ended up in psychiatric care after suffering from paranoia and hallucinations, the result of his overexposure to bromides.
Then there is the question of data privacy, an issue that many of us choose to ignore in favour of convenience in the moment. What happens to the health information we are sharing with Big Tech? And with all this in mind, should we be proceeding with far greater caution?
OpenAI has, perhaps inevitably, framed its chatbot as an “important ally” in helping patients to “self-advocate” and navigate the healthcare system, especially in the United States, where that process can be complex and fragmented. In January, it rolled out ChatGPT Health for a limited group of users. This feature allows users to connect their health information, such as medical records or data from apps like Apple Health or MyFitnessPal, so that they can receive more personalised responses in their chats.
At the time, the company said this latest development was designed to “support, not replace, medical care”, and explained that health information would be stored separately from other chats. It’s currently unavailable in the UK, the European Economic Area and Switzerland, however, due to tighter restrictions around digital privacy.
Last month, a study published in the journal Nature Medicine tested the chatbot on 60 medical scenarios, changing various conditions such as the patient’s gender or race, or adding test results and comments from family. The researchers found that while ChatGPT Health performed well in “textbook emergencies”, where patients reported unmistakeable symptoms, it floundered elsewhere.
In 51.6 per cent of cases where the patient needed to immediately head to hospital, the chatbot advised them to stay at home or wait for a routine appointment. “ChatGPT Health is most reliable when the clinical decision is least consequential, and least reliable when it matters most,” lead researcher Ashwin Ramaswamy told the British Medical Journal.
When The Independent contacted OpenAI, they told us that they welcome independent research around AI healthcare systems, but claimed that the study doesn’t reflect how people typically tend to use ChatGPT Health, or how it is designed to work in real-life scenarios. They added that they are continuing to improve the safety and reliability of the programme through testing and feedback before rolling it out more broadly.
Of course, the act of trying to access health-related information online is nothing new. Who among us can honestly say that they’ve never trawled the web to learn more about some apparently minor symptom, only to steadily convince themselves that said symptom is in fact some dreadful harbinger of doom? “We used to talk about ‘Dr Google’,” says Dr Sonia Szamocki, a former NHS doctor who is now founder and CEO of AI healthtech company 32Co. “This is a more conversational version, which makes it feel more like speaking to a real healthcare professional.
“What people are trying to solve is not a new problem, which is that it’s hard to get access to doctors,” continues Szamocki. “Waiting lists are high, and that’s if you want to just get to a GP.” It is even harder to get more specialist knowledge, she notes. “That’s because there are even more obstacles in the way. So it’s completely natural that people go online to try and get the information that they’re struggling to get.”
Consulting an LLM is not the same as looking up the answer in a book, or even searching Google, which is essentially “pulling a fact out and presenting that to you on a plate”, Szamocki says. Instead, LLMs are “pattern recognisers”, she explains. “They are probabilistic mechanisms to find the most likely answer to a question, [and have learnt from billions of texts to] try to predict what’s the next best word in a series of words.”
And, crucially, “You can’t be 100 per cent sure, if you ask it something, that it will retrieve exactly the right fact.” That, Szamocki adds, is “really where the worry comes from”.
Plus, an LLM will tend to try and be extremely helpful even when it doesn’t 100 per cent know the answer. These platforms have a habit of prioritising helpfulness over, say, accuracy, argues Szamocki. Hallucinations, she says, can occur “where [an LLM] is trying to fill a gap in knowledge but saying, ‘Look, it’s probably this.’”
The way your prompt is written can have an impact on the response you receive. When you send a message or question to a chatbot, you’re putting in what you think is important, notes Dr Caroline Pilot, acting chief medical officer for digital clinic HealthHero. “So the prompt is biased in the first place.” You might also inadvertently leave out key information that a doctor would ask you about. “When I’m consulting with someone, I let them tell me what they think is important,” she explains. But she is also asking herself: “OK, but did they have this other thing that they didn’t mention?”
To work around all this, chatbot fan Alexandra Watson says she always asks for sources and requests a cross-check when she presents ChatGPT with a medical question.
Are doctors concerned about how “Dr ChatGPT” might be changing the way their patients are seeking medical advice? “I know lots of clinicians mind, but I really don’t mind if people have done their homework and asked a chatbot,” Dr Pilot says. “I find it interesting to have the conversation and explore their fears and concerns, and what the chatbot said.”
But it can depend on the patient, she says. If someone has a fixed idea about what their problem might be, they may already be frightened by whatever the internet has told them it is.
Professor Victoria Tzortziou-Brown is chair of the Royal College of General Practitioners. “It’s encouraging to see patients being curious about their health,” she says. But she cautions that chatbots are not without risks. “It’s not always clear where the information is being drawn from, or how accurate it is,” she explains, adding that the results could therefore contain content that is neither evidence-based nor trustworthy.
There is “huge potential” for technology to support patients, she says. “But this will always need to work alongside, and complement, the work of doctors and other healthcare professionals.”
And it is important to bear in mind that handing over our health information to LLMs can introduce significant data privacy risks. Dr Aaisha Makkar, a lecturer in computer science at the University of Derby, specialises in ethical privacy-preserving technologies. “Many AI systems store user input in cloud environments, where models may iteratively learn from the data,” she says. But this process is not always guaranteed to follow strict anonymisation standards.
Plus, sometimes LLMs can “infer or reconstruct sensitive personal details from underlying patterns in the data”, even if users have tried to steer away from obvious identifiers. Most of us, Makkar notes, will have little idea about how our data is processed behind the scenes. “Even the most reputable AI providers rarely allow users to choose how long their health-related data is retained for.”
She advises, therefore, that we should turn to chatbots “only for general medical guidance, rather than for personalised medical advice that requires sharing detailed health information”.
Pilot, meanwhile, is asked “all the time” whether AI will replace doctors. “I don’t see that it will replace them,” she says. “I think that it will aid them, and that they will use it as a consulting tool.”
And however friendly and eager to please it might seem, says Tzortziou-Brown, an AI chatbot cannot replace a conversation with a clinician who knows the patient, understands the context, and can make safe, evidence-based decisions.