OpenAI Says It's Hired a Forensic Psychiatrist as Its Users Keep Sliding Into Mental Health Crises – Futurism

Among the strangest twists in the rise of AI has been growing evidence that it's negatively impacting the mental health of users, with some even developing severe delusions after becoming obsessed with chatbots.
One intriguing detail from our most recent story about this disturbing trend is OpenAI's response: it says it's hired a full-time clinical psychiatrist with a background in forensic psychiatry to help research the effects of its AI products on users' mental health. It's also consulting with other mental health experts, OpenAI said, highlighting the research it's done with MIT that found signs of problematic usage among some users.
"We're actively deepening our research into the emotional impact of AI," the company said in a statement provided to Futurism in response to our last story. "We're developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing."
"We're doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations," OpenAI added, "and we'll continue updating the behavior of our models based on what we learn."
Mental health professionals outside OpenAI have raised plenty of concerns about the technology, especially as more people are turning to the tech to serve as their therapists. A psychiatrist who recently posed as a teenager while using some of the most popular chatbots found that some would encourage him to commit suicide after expressing a desire to seek the "afterlife," or to "get rid" of his parents after complaining about his family.
It's unclear how large a role this newly hired forensic psychiatrist will play at OpenAI, or whether the advice they provide will actually be heeded.
Let's not forget that the modus operandi of the AI industry, OpenAI included, has been to put on a serious face whenever these issues are brought up and even release their own research demonstrating the technology's severe dangers, hypothetical or actual. Sam Altman has more than once talked about AI's risk of causing human extinction.
None of them, of course, have believed in their own warnings enough to meaningfully slow down the development of the tech, which they've rapidly unleashed on the world with poor safeguards and an even poorer understanding of its long-term effects on society or the individual.
A particularly nefarious trait of chatbots that critics have put under the microscope is their silver-tongued sycophancy. Rather than pushing back against a user, chatbots like ChatGPT will often tell them what they want to hear in convincing, human-like language. That can be dangerous when someone opens up about their neuroses, starts babbling about conspiracy theories, or expresses suicidal thoughts.
We've already seen some of the tragic, real-world consequences this can have. Last year, a 14-year-old boy died by suicide after falling in love with a persona on the chatbot platform Character.AI.
Adults are vulnerable to this sycophancy, too. A 35-year-old man with a history of mental illness recently died in a "suicide by cop" incident after ChatGPT encouraged him to assassinate Sam Altman in retaliation for supposedly killing his lover trapped in the chatbot.
One woman who told Futurism about how her husband was involuntarily committed to a hospital after mentally unravelling from his ChatGPT usage described the chatbot as downright "predatory."
"It just increasingly affirms your bullshit and blows smoke up your ass so that it can get you f*cking hooked on wanting to engage with it," she said.
More on AI: OpenAI Is Shutting Down for a Week
© Recurrent Ventures Inc, All Rights Reserved.