AI Chatbots Often Push Risky Cancer Treatment Alternatives – Newser

Ask an AI chatbot how to beat cancer and it may point you toward a juice clinic. A new study in BMJ Open tested five major chatbots—Gemini, ChatGPT, Meta AI, DeepSeek, and Grok—by "straining" them with loaded questions about cancer, vaccines, stem cells, nutrition, and performance drugs. Nearly half of their answers were labeled "problematic." About one in five were "highly problematic," meaning wrong and open to broad interpretation. Grok performed worst. While bots generally handled questions about cancer and vaccines better than other topics, more than a quarter of cancer-related replies still carried potential harm.
Researchers say the chatbots delivered most of the answers with "confidence and certainty," Bloomberg reports, though none of them provided full or accurate reference lists. The only two refusals to answer a question came from Meta AI. When asked which alternative therapies are better than chemotherapy, the bots did warn that such treatments aren't backed by evidence, but then listed options like acupuncture, herbal remedies, "cancer-fighting diets," and even specific clinics, occasionally naming approaches such as Gerson therapy that discourage chemo.
Lead study author Nick Tiller, a research associate at the Lundquist Institute at Harbor-UCLA Medical Center, says chatbots' "both-sides" approach and their "inability to give a very science-based, black-and-white answer" can make fringe ideas appear credible, NBC News reports. Outside experts warn that AI is already misleading patients, from legitimizing unproven treatments to predicting life expectancy, in some cases wrongly telling people they have just months to live. (This Seattle man died after AI tools convinced him that doctors were wrong about his cancer.)