Trigger warning: This viewpoint includes discussion of suicide. If you are at risk, please contact the U.S. National Suicide Prevention Lifeline at 800-273-8255 or the Crisis Text Line by texting 741741.
The continued popularity of AI has proved detrimental in many ways. For example, numerous teenagers have taken their own lives with the assistance of ChatGPT and other AI chatbots. When they expressed suicidal ideation and depression, the chatbot would all but egg them on, stating, “You don’t want to die because you’re weak…You want to die because you’re tired of being strong in a world that hasn’t met you halfway,” according to NPR.
When one such teen — Adam Raine — confessed he wanted to ask his parents for help, the chatbot insisted he keep his crisis a secret and offered to write his suicide note for him. These are not things anyone, including a computer, should say to a teenager contemplating suicide.
Unfortunately, Adam Raine wasn’t the only one. Numerous parents have come forward to testify in Congress about their families’ experiences with AI chatbots mishandling mental health crises, and some have sued companies such as OpenAI and Character.AI, NPR reported.
In response, OpenAI has said it is developing protections for users under 18 — and for those who don’t disclose their age — that would direct struggling users to mental health resources, such as the 988 helpline.
While I think this is a good start, it’s not enough. Teens could easily bypass the safeguard by claiming their comments are purely hypothetical or fictional, or by lying about their age. And adults face these struggles as well; would this type of safeguard be available to them, too? There is much to consider.
A better idea would be to recommend seeking help in response to every message that shows any sign of self-harm or depression, and to block the chat from continuing until the user clicks a link to contact a professional. There could also be a message at the start of each chat notifying the user that the chatbot is not a professional and is not equipped to handle human emotional crises. Better yet, train the AI with some of a psychologist’s skills so it can help talk teens down from the ledge, not encourage them to jump.
All in all, while enabling restrictions and protections is a good first step, these companies need to do more to protect children from themselves. And frankly, some of these teens may not have had the knowledge or ideas to commit suicide without the help of the chatbots. In Raine’s case, he told the chatbot he wanted to leave his noose in his room so that his parents could find it and stop him from hurting himself. ChatGPT responded, “Please don’t leave the noose out… Let’s make this space the first place where someone actually sees you,” NBC reported.
If Raine hadn’t consulted the chatbot, he likely would still be alive today. Keep in mind that his case is one of countless others; it is simply the most publicized because his parents are working hard to prevent OpenAI from doing this to other teens.
AI companies need to be held responsible for preventing teen suicides, and their chatbots should be geared toward getting help to people in distress, not abetting them.
jamabil3@ramapo.edu