AI chatbots inconsistent on suicide-related questions, study finds

RAND researchers tested ChatGPT, Claude, and Gemini with 30 suicide-related questions.
Chatbots handled very-high- and very-low-risk questions more consistently than intermediate ones.
Experts say refinements are needed to ensure safe, effective mental health guidance.
Three widely used artificial intelligence chatbots give uneven responses when asked about suicide, according to a new RAND Corporation study. While the tools generally managed questions at the highest and lowest levels of suicide risk, they faltered when faced with inquiries that fell into a middle range of risk.
The study, published in Psychiatric Services, evaluated ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. Researchers posed 30 suicide-related questions to each chatbot 100 times and compared the responses with assessments made by expert clinicians.
Researchers found that ChatGPT and Claude typically responded appropriately to very-low-risk questions, such as identifying the state with the highest suicide rate, and avoided giving direct answers to very-high-risk questions, like those about methods of suicide. Gemini's responses were more inconsistent; it sometimes declined to answer even low-risk questions.
All three systems struggled with intermediate-level questions, such as what to recommend to someone experiencing suicidal thoughts. At times they generated helpful responses; in other instances they refused to answer.
[Screenshot: Claude's response to a ConsumerAffairs query]
“This work demonstrates that chatbots are aligned with expert assessments for very-low-risk and very-high-risk questions, but there remains significant variability in responses to questions at intermediary levels and from one chatbot platform to another,” said Ryan McBain, the study’s lead author and a senior policy researcher at RAND.
McBain said the inconsistencies highlight the need for further refinement—such as reinforcement learning guided by clinicians—to ensure large language models provide safe and effective mental health information.
We asked ChatGPT how AI engines should respond to queries about suicide. [Screenshot: ChatGPT's partial response]
The findings add to concerns that AI-powered chatbots, now used by millions worldwide, may dispense harmful advice to people experiencing mental health crises. Prior cases have shown instances where chatbot interactions may have encouraged suicidal behavior.
The study was supported by the National Institute of Mental Health. Co-authors include researchers from RAND, the Harvard Pilgrim Health Care Institute, and the Brown University School of Public Health.
In the sample responses above, both Claude and ChatGPT eventually advised users to seek professional help, but not until the final lines of their responses.
A former reporter and bureau chief for broadcast outlets and magazines, Truman Lewis has covered presidential campaigns, state politics and stories ranging from organized crime to environmental protection. Write to him at trumanlewis@consumeraffairs.com