Paranoid? Delusional? AI is not here to help – Medical Republic

We’re hardly surprised, but chatbots are making things worse for the mentally ill.
Regular readers of this digital rectangle are probably aware your Back Page scrawler is not a huge fan of chatbots.
Maybe our advancing years are responsible for our increasing devotion to the cause of Luddism, but we prefer to think our distaste for this technology is firmly evidence-based.
Putting aside the fact that we personally have not once had a satisfactory interaction with the devil’s spawn of artificial intelligence, there seems to be a growing consensus among medical experts that the use of unregulated chatbots is clearly harmful to some folks’ mental health.
And a new study conducted by a team of psychiatrists at Denmark’s Aarhus University and published earlier this month in the journal Acta Psychiatrica Scandinavica certainly adds fuel to that fire.
In a nutshell, the researchers found that the use of human-like chatbots such as ChatGPT was reinforcing delusions and hallucinations in people “prone to psychosis”.
The team reached this conclusion after analysing the digital health records from around 54,000 Danish patients with diagnosed mental illnesses, finding more than 180 instances of patient notes containing mentions of AI chatbots.
The Danish boffins said that the use of the bots — particularly intensive, prolonged use — appeared to deepen symptoms of mental illness in dozens of patients, with this pattern especially true for patients prone to delusions or mania.
They also warned that the risks of chatbot use may be “severe or even fatal” for some.
In an accompanying media release, lead researcher Dr Søren Dinesen Østergaard said that while more research into causality was needed, he “would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness”.
“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one,” Dr Østergaard said, adding that intensive chatbot use “appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia”.
The study also found that, as well as deepening delusional beliefs, chatbots appeared to worsen suicidal ideation and self-harm, disordered eating, depression, and obsessive or compulsive symptoms, among other mental health problems.
At the risk of sounding like a broken record here, your correspondent is beginning to wonder just how much evidence our regulators need before they look seriously at this issue and take some concrete action to try to minimise the harms.
Or are we too afraid that the gazillionaires who profit from these products might get their feelings hurt if we ask them to rein in their toys?
Restore our faith in humanity by sending story tips to Holly@medicalrepublic.com.au