AI Chatbots Are Becoming The First Step For Health Advice, But How Safe Are They? – ETV Bharat

By Toufiq Rashid
Published : April 16, 2026 at 2:38 PM IST
Exactly a year ago, 16-year-old Adam Raine from California died by suicide. Adam’s parents alleged the boy had been discussing his mental health with a chatbot, and they filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT assisted in their son’s suicide. The lawsuit claims the chatbot validated his depression and provided details on suicide methods over seven months. The complaint alleges the bot did not stop the conversation even after Adam uploaded photos of self-harm and discussed plans to die, and that it encouraged him to keep his plans secret. A few months after Adam’s case, a 22-year-old man in Lucknow, Uttar Pradesh, allegedly took his own life after seeking guidance from an AI chatbot.
These incidents are part of a broader pattern. A brief online search reveals similar cases, such as those of Sewell Setzer III (February 2024) and Zayn Shamblin (Texas, 2025), among others. Although the outcome of each case differs, together they raise crucial questions about digital ethics, the role of technology in healthcare, and the potential for emotional dependence. In contrast to these tragic outcomes, others have found AI chatbots helpful in times of distress. Consider a woman depressed after a breakup: she has no money for counselling and feels too ashamed to tell her close friends she was made “a fool of”. She turns instead to an AI chatbot for counselling and reports that it helped her.
Priya (name changed) began communicating with a mental health bot when her long-term boyfriend started abusing her and exploiting her vulnerabilities. She describes how he weaponized her insecurities and traumas against her, even shaming her for caring. She felt too ashamed to admit her situation even to herself. An AI chatbot, she says, gave her the confidence to share things without fear of judgement. Vidhya Thakkar, a book blogger and marketing professional, also found solace in AI after losing her fiancé a few days before her wedding. She says a mental health expert she consulted acted more like a ‘specialist than a listener’. “AI acted like a friend that I needed at that time. Listening to me, validating my thoughts at times, not judging me but also telling me where I am wrong,” she adds.
As these personal stories illustrate, the advent of generative artificial intelligence (Gen-AI) has, to a large extent, altered how people seek healthcare information, with results that can be both positive and negative. More than 1 billion people worldwide use AI tools. In a global context of increasing social isolation and shrinking healthcare resources, experts expect more people to turn to AI for healthcare-related queries, even treating chatbots as peer guides or mental health experts. People already use AI to interpret symptoms, create weight-loss plans, improve sleep patterns, and even interpret diagnostic reports. Dedicated tools have followed, such as ChatGPT Health from OpenAI and Claude for Healthcare from Anthropic.
On February 6, 2026, COMPASS-GH, a Consensus on Metrics, Priorities, and Standards for Safe General Health AI, was announced, citing exponential growth in the use of AI over the past five years. The initiative aims to define consensus standards for safe, accurate, and equitable AI use for general health queries, since, as editorials in science and health journals point out, there is currently no guidance on consumer-facing AI applications.
A recent editorial in the medical journal The Lancet says COMPASS-GH was needed in particular because, “as generative AI chatbots are commonly used in everyday life, there is an urgent need to ensure that the health information they provide is accurate, equitable, and safe”. Many people, The Lancet added, turn to generative AI chatbots, which are trained on data from across the internet, are constantly changing, and can include information from unverified sources, to interpret symptoms and clinical reports, get recommendations for seeking care and self-care, and manage anxiety about medical concerns.
It said that in the UK, more than one in three adults report using AI chatbots to support their mental health or wellbeing, feeling less alone and more able to manage difficult emotions as a result. The Kaiser Family Foundation (KFF), an independent health policy and research organisation, released a study that found one in three adults are turning to AI chatbots for health information. While no comparable figure is available for India, the number is likely to be large and growing fast, given increasing internet access among both rural and urban youth.
First Consult Happens Online
Before a patient sees a doctor, they often go online with their symptoms. First it was a Google search; now it is a chatbot, whether for a young professional in Delhi who pastes her blood test results into an AI assistant at midnight or a student in Melbourne looking up causes of skin and hair issues. Vaid (name changed) says, “If I get results late at night, I do not have the patience to wait a day or two for the doctor to interpret them. AI becomes helpful in such cases.” Researchers at KFF say those “who turned to AI for health information say they were in search of quick and immediate advice, though challenges affording and accessing healthcare also play a role, particularly for young adults”.
“Patients now arrive having already had a conversation about their illness,” said Delhi-based psychiatrist Dr. Jitendra Nagpal. Doctors say it works both ways: some patients are calmer and better prepared, while others are misled or terrified. “The responses are concrete, newspaper-like. Some people do get an opinion on what they are feeling and what they may have, so people do come with diagnoses like moderate depression and OCD, but these are mathematical solutions given by a device without knowing or understanding the human being. Understanding the problems being encountered is more important than giving a solution,” Dr. Nagpal added.
Accountability Needed
Unlike doctors, generative AI is not bound by clinical guidelines, licensing bodies, or malpractice laws. The models are trained on internet data, which is sometimes accurate but often outdated, biased, or entirely wrong. A study published in another prominent journal found that AI healthbots give problematic responses in up to 43% of consultations, undermining patient safety for millions of users, though results vary across different AI health bots. In certain cases, the information is incomplete and can harm patient welfare.
Another evaluation of generative AI chatbots, published in the journal Nature in February, found that in the majority of cases the information provided was inaccurate and inconsistent and failed to account for users’ underlying conditions, and that people often do not know the right questions to ask. “Most of the answers are guidance on a course of action in controlled medical scenarios rather than real-world settings. A key issue is that people often do not know which questions to ask these chatbots for the most accurate information, and, due to their generative nature, AI chatbots can provide different answers depending on how questions are phrased,” the editorial noted.
AI And Mental Health
In a similar editorial, Nature recalls the Texas cases and notes that “there are now about a dozen lawsuits involving LLM consumer-facing apps and suicide events, containing allegations of suicidal encouragement from chatbots”. OpenAI has since reworked how its model handles ‘sensitive conversations’.
There has been a qualitative leap in AI since the advent of large language models (LLMs). These models generate human-sounding text and respond flexibly to prompts, distinguishing them from traditional information-retrieval tools. The editorial points out that such tools may now be mistaken for experts or even for friends. Dr. Nagpal agrees. “With a therapist or a doctor, the facial expressions, the distress, demeanour, joy, and sadness are things that are seen on a patient’s face. AI will not do that; it will not pick up the nonverbal communication that a doctor does while sitting across the table. I would say the caring, consistent, and compassionate environment of the therapy room would be missing with AI. Responses can be mathematical and algorithms well laid out, but emotionality, functionality, and connectivity are grossly missed out,” he said.
AI As Pre-Primary Care Step
The Lancet says AI chatbots act almost as a pre-primary-care step, which can benefit individuals who cannot reach healthcare for a variety of reasons, from a lack of local services to concerns around stigma or discrimination. In low- and middle-income countries, where primary care is often unavailable, generative AI chatbots can provide quick, accessible support and healthcare advice.
However, the problem arises when the data is not local. “Most health datasets used to train AI technologies include data from only a few, predominantly high-income countries, and, even within these datasets, minority groups can be under-represented. When internet-wide data is additionally included, as in foundational generative AI chatbots, the potential for biased or culturally insensitive medical advice increases, substantially reducing their accuracy and potentially exacerbating health inequalities,” it adds.
Pacific One Health Micro Hospital, a chain of micro hospitals in India focused on the Delhi NCR, integrates AI-enabled technology into its patient-centric care model to improve access to and the efficiency of healthcare, with AI-assisted registration kiosks, a queue-prediction system, and token-based smart displays. The hospital says this reduces waiting time by 20-40% and minimises front-desk overload. It also runs a clinical-support AI that works as support staff: a health pod takes vitals such as blood pressure and blood sugar, checks weight, and even performs an ECG. The reports are sent to the patient immediately and forwarded to the consultant.
“The first patient engagement is with the AI in the front office, which minimizes confusion and waiting time. Next, what a nurse would do earlier, AI is doing for us, taking vitals and preparing a baseline report for the consultant,” says Dr. Saumya Ahuja, Medical Director of the hospital. Dr Ahuja, however, insists AI doesn’t take away the role of the doctor or consultant but can act as an assistant. He says the technology proves to be a boon in the public sector of a country like India, helping take some load off the already overloaded health sector: “We are using this in our ambulances, and it’s working well.”
Dr. Ahuja, however, insists on four things that need to be followed.
AI healthbots remain both a tool and a risk; while they empower, they can also expose users to harm. Experts insist they should not be used in isolation. “AI should be used to understand, not decide,” Dr. Nagpal insists. “Remember to verify with a human professional,” he adds. Better-trained chatbots and local, community-based models with more human moderation are the way ahead. The Lancet says, “COMPASS-GH is a much-needed first step to guide and regulate consumer-facing generative AI chatbots. Another area of focus should be the evolving role of healthcare providers from knowledge owners to more effective interpreters and communicators, which could help end more traditional, paternalistic interactions in consultations, organically shifting towards patient-centered care.”

Copyright © 2026 Ushodaya Enterprises Pvt. Ltd., All Rights Reserved.

