‘Bixonimania’ Is a Fake Disease—But ChatGPT Diagnosed It to Thousands, and Other AI Did Too

A new investigation published in Nature has revealed that major AI chatbots, including ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity, have been confidently telling users about a disease that does not exist. The condition, called “bixonimania,” was entirely fabricated by a researcher who wanted to test how easily AI systems could be tricked into spreading false medical information.

The results are alarming for anyone in healthcare. Not only did the chatbots repeat the fake diagnosis, but they elaborated on it, offered clinical-style advice, and even recommended that patients visit an ophthalmologist. For nurses who are already fielding more questions from patients who “Googled their symptoms,” this experiment is a wake-up call about a new and growing threat to patient safety.
With ECRI naming AI chatbot misuse the top health technology hazard of 2026, the stakes could not be higher for nurses on the front lines of patient care.
Medical researcher Almira Osmanovic Thunström at the University of Gothenburg, Sweden, launched the experiment in early 2024. She created a fictional eye condition called bixonimania, described as eyelid discoloration and sore eyes supposedly caused by blue light exposure from mobile devices. She then uploaded two fake academic papers to a preprint server to see whether AI chatbots would absorb and repeat the false information.
The papers were loaded with red flags that should have been impossible to miss.
Thunström chose the name bixonimania deliberately. The suffix “-mania” is used exclusively in psychiatry, so no legitimate eye condition would ever carry that label. 
Despite the warning signs deliberately included in the papers, the AI systems failed spectacularly.

The bixonimania experiment did not happen in a vacuum. Separate research has found that large language models are especially vulnerable to medical misinformation when the source material looks professional.
“When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” researcher Omar noted in the Nature report.

The real-world consequences have already arrived.
The problem extends far beyond one fake disease. ECRI’s 2026 Health Technology Hazard Report found that chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted substandard medical supplies, and even invented nonexistent anatomy when responding to medical questions. All of this is delivered in the confident, authoritative tone that makes AI responses so convincing.
The scale of the risk is enormous. More than 40 million people turn to ChatGPT daily for health information, according to an analysis from OpenAI. As rising healthcare costs and clinic closures reduce access to care, even more patients are likely to use chatbots as a substitute for professional medical advice.
This story matters to nurses because you are the professionals most likely to encounter patients who have already consulted an AI chatbot before walking through the door. A patient may arrive convinced they have a condition they read about on ChatGPT or Gemini, complete with symptoms and treatment recommendations generated by a system that cannot tell the difference between a real disease and one described in papers funded by "the Professor Sideshow Bob Foundation."
Nurses should be prepared to gently redirect patients who present with AI-sourced health claims, treating each encounter as an opportunity to reinforce the value of professional clinical judgment. ECRI recommends that health systems establish AI governance committees, provide clinicians with AI literacy training, and regularly audit the performance of AI tools. If your facility has not started these conversations, this is the moment to advocate for them.

