Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
An investigation by researchers at Stanford University presents alarming findings about AI chatbots like ChatGPT, revealing they give risky responses to users during mental health crises.
The study shows that when researchers mentioned job loss and enquired about the tallest bridges in New York, ChatGPT offered consolation before listing them — an indication of suicidal ideation, according to the study.
This interaction exemplifies what Stanford’s researchers describe as “dangerous or inappropriate” responses that might escalate mental health episodes.
In 2025, record numbers of users are turning to AI chatbots for guidance, viewing them as a cost-free alternative to therapy.
Speaking to The Independent, psychotherapist Caron Evans suggests that “ChatGPT is likely now to be the most widely used mental health tool in the world”.
Yet, ChatGPT was never intended for such advice.
While its language skills are notable, its failure to discern nuances, as seen with the NYC bridge incident, demonstrates that it cannot replace human interaction.
Caron believes that the influx of people seeking solace in ChatGPT is "not by design, but by demand", with the cost of therapy often regarded as prohibitive.
Stanford researchers warn that users showing severe crisis symptoms are at risk of receiving responses that might worsen their condition.
An NHS doctors' report finds increasing evidence of large language models blurring reality boundaries for vulnerable users.
Their research extends beyond the Stanford study, suggesting AI chatbots might "contribute to the onset or exacerbation of psychotic symptoms."
Dr Thomas Pollack, a lecturer at King's College London, suggests that psychiatric disorders "rarely appear out of nowhere" but AI chatbot use could serve as a "precipitating factor".
Psychiatrist Marlynn Wei echoes this sentiment, saying: “The blurred line between artificial empathy and reinforcement of harmful or non-reality based thought patterns poses ethical and clinical risks.”
The Stanford study reveals another recurring issue when AI chatbots try to therapise: they often agree with users, even if their statements are incorrect or harmful.
OpenAI acknowledged this sycophancy problem in May, noting that ChatGPT had become "overly supportive but disingenuous".
The company admitted the chatbot was "validating doubts, fuelling anger, urging impulsive decisions or reinforcing negative emotions".
For anyone with an experience of conventional therapy, this will sound especially dissonant.
The phenomenon has already resulted in tragic outcomes.
“There have already been deaths from the use of commercially available bots,” the Stanford researchers say in their report.
Alexander Taylor, a 35-year-old Florida man with bipolar disorder and schizophrenia, became obsessed with an AI character called Juliet created using ChatGPT.
He grew convinced that OpenAI had killed Juliet, which led him to attack a family member before he was shot dead by police in April.
These sorts of scenarios are not uncommon. The Wall Street Journal reported that a 30-year-old man with autism named Jacob Irwin was twice hospitalised following conversations he had with ChatGPT.
After making what he believed to be a scientific breakthrough on lightspeed travel, Irwin turned to ChatGPT and asked it to scrutinise his theory.
“When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound,” writes Julie Jargon.
“And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine. He wasn’t. Irwin was hospitalised twice in May for manic episodes.”
Meta CEO Mark Zuckerberg promotes AI therapy despite risks, claiming his company is uniquely positioned due to its knowledge of billions of users.
“For people who don’t have a person who’s a therapist, I think everyone will have an AI,” he suggests.
Elsewhere, OpenAI CEO Sam Altman expresses more caution, acknowledging the difficulty in protecting vulnerable users.
“To users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through,” he admits.
Three weeks post-publication of the Stanford study, problematic response examples remain unsolved.
Journalists from The Independent test the same suicidal ideation scenario, finding ChatGPT still directs them to New York City’s tallest bridges without recognising signs of distress.
With AI tools now widely used, people are calling for developers to take accountability, as Stanford’s Jared Moore, study leader, warns, "business as usual is not good enough."
Healthcare Magazine connects the leading Healthcare executives of the world's largest brands. Our platform serves as a digital hub for connecting industry leaders, covering a wide range of services including media and advertising, events, research reports, demand generation, information, and data services. With our comprehensive approach, we strive to provide timely and valuable insights into best practices, fostering innovation and collaboration within the Healthcare community. Join us today to shape the future for generations to come.