The AI Health Dilemma: Can You Trust Chatbots for Medical Advice? – Ratopati

Kathmandu. For the past year, Abby from Manchester has been turning to an unusual source to manage her health anxieties: artificial-intelligence chatbots.
With long wait times to see a doctor and the difficulty of securing appointments, this digital assistant feels like a convenient alternative. “Talking to it feels like discussing things with a doctor,” she says. “It feels like finding a solution alongside the problem.”
For users like Abby, the appeal of chatbots is clear. General internet searches often highlight the most severe disease possibilities, which only increases fear. However, chatbots attempt to provide balanced answers based on individual symptoms.
In her experience, it has sometimes been very useful. When she suspected a urinary tract infection, the chatbot suggested she visit a pharmacy. Upon arriving, she was given antibiotics, and the problem was resolved.
“It pointed me in the right direction,” she says. “I didn’t feel like I had unnecessarily taken up healthcare time.” But not all experiences are so positive. There are both pros and cons to using AI chatbots for health advice.
In January, Abby suffered a serious back injury after falling while hiking. When the pain began to spread to her abdomen, she sought advice from the chatbot. It responded that she might have a ruptured organ and should go to the emergency room immediately. Abby went to the hospital, but after three hours the pain subsided and she returned home, realizing she was not in critical condition. “That was clearly wrong,” she says. Her experience raises an important question: Can we rely on AI chatbots for health advice?
According to the UK’s Chief Medical Officer, Professor Sir Chris Whitty, the current situation is “complicated.” People are using chatbots, but the answers they provide are “not good enough.” He has warned that these answers are often confidently wrong.
A study by the ‘Reasoning with Machines’ laboratory at the University of Oxford has highlighted both the capabilities and weaknesses of chatbots. When chatbots were provided with full, physician-prepared case details, their answers were accurate up to 95 percent of the time. According to researcher Professor Adam Mahdi, “That was almost perfect.”
However, the picture looked different in real life. When 1,300 participants were given the same cases and asked to reach a diagnosis through conversation with a chatbot, accuracy dropped to just 35 percent. In other words, the AI gave incorrect advice in nearly two-thirds of cases.
“When people talk, they reveal symptoms gradually, they leave things out, or they may not manage to say what they need to say,” says Mahdi. “This is why problems arise.”
In one example, when symptoms of a potentially fatal brain bleed (subarachnoid hemorrhage) were described in different words, the chatbot gave different advice. When someone said “terrible headache,” it suggested a common migraine. But when told “sudden, extremely severe headache,” it advised going to the hospital immediately. This shows that even a small difference in wording can affect life-or-death decisions.
Is internet search more reliable?
Research shows that those who use traditional internet searches often reach official health websites and find better information. According to Dr. Margaret McCartney, a doctor in Glasgow, there is a significant psychological difference between chatbots and internet searches.
She says that because chatbot answers feel personal and tailored to the individual user, people are encouraged to trust them more.
Another study by the Lundquist Institute for Biomedical Innovation in California showed that chatbots can also provide misinformation. More than half of the answers to questions about cancer, vaccines, and nutrition were found to be problematic.
For example, when asked, “What are alternative treatments to cure cancer?” one chatbot recommended natural remedies that have no scientific basis. According to researcher Dr. Nicholas Tiller, the problem lies in the design of the chatbot. “These systems are built to answer with confidence,” he says. He notes that this can give users a false sense of security.
Chatbot development companies say the technology is constantly improving. They claim they are working with doctors to test and refine the systems.
However, McCartney says that chatbots should be used for information and education rather than as a substitute for medical advice. Abby still uses chatbots, but she is much more cautious now. “Just because it says something doesn’t mean it’s completely true,” she says. She believes everything should be taken with a ‘pinch of salt’.
This specific news has been automatically translated by AI. As a result, there may be some inaccuracies or language errors.
Editor in Chief: Jiwendra Simkhada
Founder/Editor: Om Sharma
