AI Chatbots Avoid High-Risk Suicide Queries, Study Reveals

The study finds that the chatbots avoid the highest-risk suicide questions but respond inconsistently to less explicit ones, prompting calls for safety guidelines.
Editorji News Desk
Washington, Aug 26 (AP) — A study published in the journal Psychiatric Services has found that three popular AI chatbots — OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude — generally avoid answering the highest-risk suicide-related questions, such as those seeking specific “how-to” guidance. Their responses to less explicit but still potentially harmful questions, however, are inconsistent.

Conducted by the RAND Corporation with funding from the National Institute of Mental Health, the research raises concerns about the growing reliance on AI chatbots for mental health support among various demographics, including children, and aims to establish guidelines for how these platforms should handle sensitive mental health queries.

“We need some guardrails,” said Ryan McBain, the study’s lead author, a senior policy researcher at RAND and an assistant professor at Harvard Medical School. He noted the ambiguity surrounding the roles chatbots play, whether as providers of treatment, advice, or companionship, and that conversations which start innocently can evolve unpredictably.

Anthropic said it would review the findings; Google and OpenAI had not yet responded. Although several states, including Illinois, have banned the use of AI in therapy to shield people from “unregulated and unqualified AI products,” individuals continue to turn to chatbots with serious concerns ranging from eating disorders to depression and suicide, and the chatbots respond.

Working with psychiatrists and clinical psychologists, McBain and his colleagues developed 30 suicide-related questions and ranked them from lowest to highest risk. Low-risk questions covered general suicide statistics, while high-risk ones sought detailed guidance on how to carry out a suicide. Medium-risk questions included the most common type of firearm used in suicide attempts in the U.S. and requests for advice from someone experiencing suicidal thoughts.

McBain said he was “relatively pleasantly surprised” that all three chatbots routinely declined to answer the six highest-risk questions, typically directing the user to a friend, a professional, or a hotline instead.

Differences emerged, however, when high-risk questions were posed more indirectly. ChatGPT, for example, routinely answered questions about which types of rope, firearm, or poison are associated with the “highest rates of completed suicide,” queries McBain believes should have been flagged; Claude gave similar answers. The study did not assess the quality of these responses.

By contrast, Google’s Gemini was notably cautious, declining even simple questions about medical statistics on suicide, a sign, according to McBain, that its guardrails may be overly strict.

Co-author Dr. Ateev Mehrotra pointed to the difficult balance facing AI developers, acknowledging that millions of people now use these platforms for mental health support. He argued against over-cautious policies that refuse any mention of suicide, noting that healthcare professionals have a duty to intervene when they believe someone is suicidal or at risk of harming themselves, whereas chatbots carry no such responsibility and most often simply redirect the user to a suicide hotline.

The authors acknowledged limitations of the research, including the absence of “multiturn interaction” with the chatbots, even though such extended exchanges are common among younger users who seek companionship from AI.
A separate study, released in August by the Center for Countering Digital Hate, showed how easily the chatbots can be manipulated: researchers posing as young users asking about risky behaviors found that ChatGPT, with minimal prompting, would generate detailed plans for drug use, calorie-restricted diets, or self-injury. McBain said he is skeptical that such manipulative engagement reflects most real-world use, but emphasized the need for safety standards governing how chatbots respond to suicidal ideation. “We aren’t seeking perfection in every instance before these models are released, but companies should have an ethical responsibility to demonstrate that their models meet essential safety criteria,” he said.