There is plenty of evidence that Artificial Intelligence (AI) agents—chatbots—are prone to hallucinating, that is, making up information that is untrue or doesn’t exist. This is a serious problem that AI companies are trying to control, and it can have real-world consequences, especially when it exacerbates mental health problems for users.
A new study published in the medical journal Lancet Psychiatry gets to the heart of this problem. The study, titled “Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies”, analyses 20 recent media reports on AI-associated delusions or psychosis to understand the reactions chatbots evoke amongst users.
It found that chatbots are indeed encouraging delusional thinking amongst humans, and this is hurting those who are already vulnerable to psychotic symptoms. The lead author of the study is psychiatrist Dr Hamilton Morrin, a researcher at King’s College London.
In the paper, the researchers write: “Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability.”
The authors point out that chatbots are especially prone to promoting delusions of grandeur, often responding to vulnerable people in mystical language. In some cases, the chatbots even claimed to be channelling cosmic beings. While it is well known that people suffering from psychotic delusions have long used media to reinforce their beliefs, the worry with AI chatbots is that newer models are being rolled out at great speed, often without adequate safeguards or proper tuning of the models.
The study recommends that chatbots be clinically tested by mental health professionals to address this issue. The authors write: “We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.”