“Please trust us with your PERSONAL HEALTH DETAILS?”: Experts warn against giving bots your medical records after OpenAI revealed “ChatGPT Health” – The Daily Dot

Lindsey Weedston
OpenAI’s launch of ChatGPT Health last week led to a flood of concerns around data privacy and the bot’s history of giving bad advice. People have already died after long, spiraling relationships with these chatbots. Treating them like medical experts seems like it would only make that problem worse.
Critics of large language models (LLMs) like ChatGPT think this is the worst thing you could do with your medical data.
The “AI” company announced ChatGPT Health on Jan. 8, pitching it as a “dedicated experience that securely brings your health information and ChatGPT’s intelligence together.” The press release argued that people are already asking the bot health questions. Rather than advising people not to do that, OpenAI built a product around it.
“Don’t Hug Me I’m Scared predicted the future what the fuck they were joking?????

Chat GPT really did this shit oh my god you can’t make this shit up 😭😭😭😭😭”
“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you,” OpenAI claimed.
The company leaned heavily on the word “secure” and its variants, and the reasons are clear. The harvesting and sale of data is a huge driver of the tech industry these days, and few LLM critics are buying OpenAI’s promises.
“Sooooo share your private medical records to an AI company who can then sell your information to whoever wants to buy it?” @DocWhatever wrote on X. “So they can use your health information to develop health technology and train their AI for free so they can get even more rich?”
Others worried about OpenAI’s ability to keep records from hackers.
“ALL CHAT BOTS HAVE BAD SECURITY,” @TehWonderkitty shouted. “ALL CHAT BOTS CAN BE VERY EASILY HACKED. CHAT BOTS CAN’T BE SUED FOR SHARING YOUR PRIVATE HEALTH INFORMATION.”
“DO NOT DO THIS.”
Those who have tried talking to ChatGPT about health are already seeing issues.
“We don’t trust you enough to treat you as an adult, but please trust us with your PERSONAL HEALTH DETAILS?” asked @Zyeine_Art. “I’ve been rerouted and told ‘I can’t continue this conversation’ for having the audacity to talk about how my chronic health conditions make me feel.”
Despite the backlash, Anthropic announced its own version of this horror: Claude for Healthcare.
Those barriers to discussing feelings likely stem from the multiple lawsuits alleging that ChatGPT coached users toward death by suicide. Another grieving mother says the bot gave her 18-year-old son advice on taking drugs until he overdosed.
These tragedies, combined with LLMs’ tendency to hallucinate statistics and studies, have critics issuing dire warnings.
“As a future therapist, I must say DO NOT DO THIS. EVER,” wrote @kkiwibin. “We have countless studies on how ineffective AI is regarding human health, and it’s mostly because of the fact that IT’S NOT A HUMAN TO HUMAN CONVERSATION.”
“The lack of data security of medical records & PHI (personal health info) was the reason I left tech,” claimed @melissamedinavo. “It is NOT safe, secure, or protected. Do NOT trust it with your physical health; you’ve seen what it does to mental health, right? Doctors are obligated to help. AI is not.”
Ghost CEO John O’Nolan joked that the “biggest innovation here was convincing the lawyers to let this out the door.”


Lindsey is a Seattle area writer interested in all things society, including internet culture, politics, and mental health. Outside of the Daily Dot, her work can be found in publications such as The Mary Sue, Truthout, and YES! Magazine.

