Google adds mental health tools to Gemini chatbot after lawsuit – The Mercury News

By Mark Bergen, Bloomberg
Alphabet Inc.’s Google plans to introduce new mental health support features for its Gemini chatbot as the company and rivals such as OpenAI face several lawsuits accusing their artificial intelligence tools of causing harm.
Gemini will add an interface directing chatbot users to a support hotline when the conversation indicates “a potential crisis related to suicide or self-harm,” Google said in a blog post on Tuesday. Additionally, the company is adding a “help is available” module for chats about mental health and design tweaks to discourage self-harm.
The rapid explosion of tools like Gemini and ChatGPT has led some users to form obsessive relationships with AI bots, allegedly contributing to delusions and, in extreme cases, murder-suicides. Several families have sued leading AI developers over the issue, and Congress has looked into the potential threats chatbots pose to children and teenagers.
In March, the family of a deceased 36-year-old man in Florida sued Google, claiming that his use of Gemini culminated in a “four-day descent into violent missions and coached suicide.” At the time, Google said the chatbot referred the man to a crisis hotline many times but promised to improve the tool’s safeguards.
In other instances, chatbot users have said that AI tools convinced them to act on clear falsehoods. In the Tuesday blog post, Google said it has trained Gemini “not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact.” The company didn’t provide further details on this process.
In the past, Google has made similar adjustments to its popular services after facing scrutiny, adding information from health institutions and professionals to its search engine and YouTube.
Google also said on Tuesday it was donating $30 million to global crisis support services over the next three years.
More stories like this are available on bloomberg.com
©2026 Bloomberg L.P.
Copyright 2026 The Mercury News. All rights reserved. The use of any content on this website for the purpose of training artificial intelligence systems, algorithms, machine learning models, text and data mining, or similar use is strictly prohibited without explicit written consent.
