Doctors think AI has a place in healthcare – but maybe not as a chatbot – TechCrunch

Dr. Sina Bari, a practicing surgeon and AI healthcare leader at data company iMerit, has seen firsthand how ChatGPT can lead patients astray with faulty medical advice.
“I recently had a patient come in, and when I recommended a medication, they had a dialogue printed out from ChatGPT that said this medication has a 45% chance of pulmonary embolism,” Dr. Bari told TechCrunch. 
When Dr. Bari investigated further, he found that the statistic was from a paper about the impact of that medication in a niche subgroup of people with tuberculosis, which didn’t apply to his patient. 
And yet, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern.
ChatGPT Health, which will roll out in the coming weeks, allows users to talk to the chatbot about their health in a more private setting, where their messages won’t be used as training data for the underlying AI model.
“I think it’s great,” Dr. Bari said. “It is something that’s already happening, so formalizing it so as to protect patient information and put some safeguards around it […] is going to make it all the more powerful for patients to use.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags. 
“All of a sudden there’s medical data transferring from HIPAA compliant organizations to non-HIPAA compliant vendors,” Itai Schwartz, co-founder of data loss prevention firm MIND, told TechCrunch. “So I’m curious to see how the regulators would approach this.”
But the way some industry professionals see it, the cat is already out of the bag. Now, instead of Googling cold symptoms, people are talking to AI chatbots — over 230 million people already talk to ChatGPT about their health each week. 
“This was one of the biggest use cases of ChatGPT,” Andrew Brackin, a partner at Gradient who invests in health tech, told TechCrunch. “So it makes a lot of sense that they would want to build a more kind of private, secure, optimized version of ChatGPT for these health care questions.”
AI chatbots have a persistent problem with hallucinations, a particularly sensitive issue in healthcare. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is more prone to hallucinations than many Google and Anthropic models. But AI companies see the potential to rectify inefficiencies in the healthcare space (Anthropic also announced a health product this week).
For Dr. Nigam Shah, a medicine professor at Stanford and chief data scientist for Stanford Health Care, the inability of American patients to access care is more urgent than the threat of ChatGPT dispensing poor advice.
“Right now, you go to any health system and you want to meet the primary care doctor – the wait time will be three to six months,” Dr. Shah said. “If your choice is to wait six months for a real doctor, or talk to something that is not a doctor but can do some things for you, which would you pick?”
Dr. Shah thinks a clearer route to introduce AI into healthcare systems comes on the provider side, rather than the patient side. 
Medical journals have often reported that administrative tasks can consume about half of a primary care physician’s time, which slashes the number of patients they can see in a given day. If that kind of work could be automated, doctors would be able to see more patients, perhaps reducing the need for people to use tools like ChatGPT Health without additional input from a real doctor.
Dr. Shah leads a team at Stanford that is developing ChatEHR, software built into the electronic health record (EHR) system that lets clinicians interact with a patient’s medical records in a more streamlined, efficient manner.
“Making the electronic medical record more user friendly means physicians can spend less time scouring every nook and cranny of it for the information they need,” Dr. Sneha Jain, an early tester of ChatEHR, said in a Stanford Medicine article. “ChatEHR can help them get that information up front so they can spend time on what matters — talking to patients and figuring out what’s going on.” 
Anthropic is also working on AI products that can be used on the clinician and insurer sides, rather than just its public-facing Claude chatbot. This week, Anthropic announced Claude for Healthcare by explaining how it could be used to reduce the time spent on tedious administrative tasks, like submitting prior authorization requests to insurance providers.
“Some of you see hundreds, thousands of these prior authorization cases a week,” said Anthropic CPO Mike Krieger in a recent presentation at J.P. Morgan’s Healthcare Conference. “So imagine cutting twenty, thirty minutes out of each of them – it’s a dramatic time savings.”
As AI and medicine become more intertwined, there’s an inescapable tension between the two worlds – a doctor’s primary incentive is to help their patients, while tech companies are ultimately accountable to their shareholders, even if their intentions are noble.
“I think that tension is an important one,” Dr. Bari said. “Patients rely on us to be cynical and conservative in order to protect them.”
Senior Writer
Amanda Silberling is a senior writer at TechCrunch covering the intersection of technology and culture. She has also written for publications like Polygon, MTV, the Kenyon Review, NPR, and Business Insider. She is the co-host of Wow If True, a podcast about internet culture, with science fiction author Isabel J. Kim. Prior to joining TechCrunch, she worked as a grassroots organizer, museum educator, and film festival coordinator. She holds a B.A. in English from the University of Pennsylvania and served as a Princeton in Asia Fellow in Laos.
You can contact or verify outreach from Amanda by emailing amanda@techcrunch.com or via encrypted message at @amanda.100 on Signal.

© 2025 TechCrunch Media LLC.
