MediaNama’s Take: OpenAI’s launch of ChatGPT Health marks a decisive push to formalise what millions already do informally: turn to a chatbot for medical guidance. However, the move also exposes a widening gap between adoption and accountability. On the one hand, OpenAI points to clinical input, privacy protections, and a dedicated health interface. On the other hand, real-world use continues to show how easily generative AI can stray into unsafe territory when it delivers confident but flawed medical advice.
Moreover, this expansion occurs at a time when courts are only just beginning to determine where responsibility lies when AI-driven health advice causes harm. Lawsuits in the US have already alleged that chatbot guardrails remain weak, inconsistent, and reactive. In the Adam Raine case, OpenAI strengthened safety features only after the lawsuit, effectively treating the public as a live testing environment. That pattern raises uncomfortable questions about whether safety systems mature through foresight or through failure.
At the same time, the industry’s preferred shields, namely disclaimers and terms of use, look increasingly inadequate for tools that present themselves as trusted, conversational authorities. When design choices deliberately mimic human interaction, they invite emotional trust and medical reliance, especially among vulnerable users. Yet when harm follows, companies often retreat behind legal fine print.
Therefore, as AI systems edge closer to clinical roles, from analysing lab reports to renewing prescriptions, regulators can no longer afford a light-touch approach. Instead, they must demand enforceable standards, pre-deployment validation, and clear liability frameworks. Otherwise, ChatGPT Health risks becoming not just a new product category, but another chapter in the long history of technology racing ahead of the rules meant to protect the public.
OpenAI has announced ChatGPT Health, a dedicated health and wellness experience within ChatGPT designed to bring users’ personal health information together with its AI capabilities. The launch follows widespread use of the base ChatGPT for health queries, with hundreds of millions of people globally asking health-related questions each week. As a result, OpenAI built ChatGPT Health to securely connect medical records and wellness apps, grounding responses in an individual’s own health data and making them more relevant and useful. Users can currently join a waitlist to access ChatGPT Health.
According to the company, the system was built with extensive clinical input and iterative evaluation. OpenAI said it worked closely with physicians around the world, involving more than 260 clinicians across 60 countries to shape how the model responds to health questions, and to help prioritise safety and clarity in its outputs. OpenAI claims the underlying model was evaluated against clinical standards that reflect how clinicians assess the usefulness of health information.
In practice, ChatGPT Health will operate as a separate space within ChatGPT, where users can upload medical records and link data from wellness apps. In this space, the system can help users understand lab results, prepare for doctor visits, explore diet and exercise routines, and consider insurance options with the added context of their own health information.
OpenAI has also put extra protections in place for sensitive data. The company keeps health conversations in an isolated environment with purpose-built encryption and does not use them to train its foundation models. Notably, this comes after Utah became the first US state to allow an AI system to prescribe medicine without human oversight in a pilot program.
OpenAI published a new report this month setting out how widely people already use ChatGPT for health-related information. According to the report, more than 5% of all ChatGPT messages globally relate to healthcare. In addition, over 40 million people turn to ChatGPT every day with healthcare questions.
In the report, OpenAI has suggested initial policy concepts for safely expanding the use of AI in healthcare. First, it proposes opening and securely connecting global medical data, with strong privacy protections, so AI systems can learn from large and diverse datasets. Second, it calls for building modern research and clinical infrastructure, including AI-enabled laboratories and decentralised clinical trials, to translate AI discoveries into real treatments. Third, OpenAI recommends supporting workforce transitions through apprenticeships, training programmes, and regional healthcare talent hubs. Finally, it urges regulators to clarify pathways for consumer AI medical devices and update medical device rules to support innovation in AI tools for doctors.
While people are increasingly turning to ChatGPT for health-related questions, the practice carries significant risks when the AI provides inaccurate or unsafe advice. Generative AI models like ChatGPT may offer plausible-sounding but erroneous, or sometimes non-existent, medical information, a phenomenon known as “hallucination.”
Real-world cases highlight these risks. A California teenager reportedly died of a drug overdose after asking ChatGPT for drug-use advice over an extended period, with the chatbot allegedly responding with increasingly risky recommendations. In another case, a 60-year-old man was hospitalised with hyponatraemia after cutting salt from his diet based on AI-generated advice, highlighting the risks of following generic guidance without clinical oversight. Research from 2023 also found that ChatGPT’s responses can be of low to moderate quality and may fail to align with medical guidelines, raising concerns about its reliability for health decisions.
Furthermore, a study of AI mental health interactions reports that chatbots struggle with suicide-related prompts, sometimes producing inconsistent safety responses. Chatbots have also been cited in lawsuits alleging harm to vulnerable users: the first such suit, filed in 2025, accused OpenAI of wrongful death after a teenager died by suicide following alleged encouragement from ChatGPT. Several cases have followed, including one in which a chatbot’s role is central to a murder case.
Harleen Kaur, a researcher at the Digital Futures Lab (DFL), says OpenAI’s announcement of ChatGPT Health raises serious questions about responsibility, safety, and user over-reliance. “I think it is irresponsible for the company to announce a health use case given that there’s a lack of clarity on its design and safety protocols,” she says.
According to Kaur, such announcements reinforce the perception that users should not question chatbot outputs, even though “there’s little evidence to support their authority on health — that is dangerous.”
More broadly, Kaur argues that existing liability frameworks fail to account for how people actually use AI chatbots for health and mental health queries. In practice, she argues that companies rely heavily on disclaimers and terms of use to shield themselves from responsibility. “The chatbot companies can get away with legal disclaimers about their status and therefore any unintended consequence that comes from such usage,” she notes, describing this approach as an improper extension of caveat emptor, where consumers are responsible for assessing the reliability of a product, to AI systems that ordinary users cannot meaningfully inspect or evaluate.
At the same time, she points to design choices that encourage users to place unwarranted trust in chatbot advice. According to Kaur, over-reliance stems not only from a lack of regulation but also from “its deliberate anthropomorphic design” and the sense of authority the interface projects. She adds that whistleblowers have alleged that companies bypass safety features such as clear disclaimers, confidence indicators, and escalation prompts for high-risk cases.
Ultimately, Kaur says governments must intervene more decisively. “Regulatory intervention in ascribing clear liability would be one of the most important levers that governments could use to prevent harm at a large scale,” she says, even if companies push back over costs and slower deployment timelines.
Kaur says India now faces a critical policy moment as chatbots increasingly enter healthcare and mental health contexts. While usage continues to expand, she argues that safety mechanisms remain inconsistent and poorly enforced. In her view, regulators must treat chatbot deployment as a lifecycle issue rather than a one-time compliance exercise. As she puts it, “checks and balances on chatbots should be done throughout the lifecycle, starting at the inception stage up until post-deployment safety assessments.”
However, she warns that many providers currently bypass even limited safeguards. Despite the existence of ethics review and clinical trial-style mechanisms for public health interventions, Kaur says “many providers don’t perform safety or ethics testing before rolling out their products.” Moreover, she adds, some companies avoid oversight altogether by refusing to categorise their products as health tools, even when users clearly rely on them for medical support.
Regulatory clarity, therefore, remains central. Kaur argues that authorities must clearly define when a chatbot functions as a medical device and subject it to risk-based assessment based on design and intended use. At the same time, she notes that accountability remains difficult to enforce in practice. “Post facto reporting of error rates is significant but also difficult to implement because of the lack of an ecosystem where such errors can be studied by anyone but the service provider,” she says.
Looking ahead, Kaur calls for a stronger research and monitoring ecosystem in India. “India needs to imagine an ecosystem where research on harms caused by chatbots is documented more scientifically and clearly,” she says, adding that, at present, it remains difficult to directly attribute harm because of weak data and limited correlation studies.