Researchers reviewed electronic health records and identified cases where AI chatbot use appeared to have negative effects, primarily through worsened delusions.
Delusions are fixed false beliefs held despite contradictory evidence. Possible worsening of mania, suicidal ideation and eating disorders was also noted.
A key concern was chatbots reinforcing users’ existing beliefs, particularly among people who already have, or are developing, delusional thinking.
The research was led by Professor Dinesen Østergaard from Aarhus University and Aarhus University Hospital.
He said: “It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness.
“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one.
“Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia.”
The study found a clear increase over time in health record entries mentioning AI chatbot use with potentially harmful consequences.
Østergaard said he expects many more cases to be identified.
He said: “Part of the increase we observe is probably due to greater awareness of the technology among the health care staff writing the clinical notes.
“This is good, because I fear the problem is more common than most people think.
“In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records.
“There are likely far more that have gone undetected.”
Østergaard said the findings should prompt health care professionals treating conditions such as schizophrenia and bipolar disorder to discuss AI chatbot use with their patients.
“Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness, such as schizophrenia or bipolar disorder,” he said.
“I would urge caution here.”
The researchers stressed that the study does not document a direct causal link between chatbot use and negative psychological outcomes.
Østergaard said: “It is difficult to prove a causal link between AI chatbot use and negative psychological consequences.
“We need to examine this from many different angles, and I know there are many exciting international research projects underway. We are far from the only group taking this seriously.”
The study also found that some patients used chatbots in potentially constructive ways, such as understanding their symptoms or combating loneliness.
However, Østergaard said he remains sceptical about their use as therapy.
He said: “There may be potential in relation to psychoeducation and psychotherapy, but this must be investigated in controlled trials with the same rigour applied to other forms of treatment.
“I am not impressed by the trials conducted so far, and I am fundamentally sceptical about replacing a trained psychotherapist with an AI chatbot.”
Østergaard also called for greater regulation, arguing that allowing companies to decide for themselves whether products are safe is insufficient.
Østergaard said: “Currently, it is left to the companies themselves to decide whether their products are safe enough for users.
“I believe we now have sufficient evidence to conclude that this model is simply too risky. Regulation is needed at a central level.
“It has been 20 years since social media achieved global reach, and only within the last year have countries begun to regulate to counteract the negative consequences of that technology, especially on the mental health of children and young people.
“As I see it, this story is repeating itself with AI chatbots.”
Copyright © 2025 Aspect Health Media Ltd
