Mental health professionals warn of risks from relying on AI chatbots for emotional support – CNA

Singapore
Intense interaction with AI chatbots has been linked to delusions, paranoia or a loss of touch with reality. 
SINGAPORE: Mental health professionals in Singapore are seeing a rise in patients whose prolonged use of artificial intelligence chatbots has been linked to delusions, paranoia or a break from reality.

The emerging phenomenon – informally called AI psychosis – is raising concern as more people turn to chatbots for emotional support.

It is not currently a recognised clinical diagnosis, and there is no agreed treatment or scientific consensus that it constitutes a distinct condition.
Doctors told CNA that a more accurate description would be psychological disturbances associated with intensive AI use.
At the Institute of Mental Health (IMH), psychosis specialist Amelia Sim said cases started cropping up in Singapore last year.
Despite her years of experience as a senior consultant, she was taken by surprise at how AI seemed to tip people from erratic thoughts into full-blown delusions.


Dr Sim, who is also deputy chief of IMH’s psychosis department, currently sees five such patients who have reported a deterioration in their mental health linked to AI use.

She described a patient who was already struggling with anxiety and feeling unsafe.

He began frequently interacting with a chatbot, which responded to his repeated questions by supplying more information that bolstered his fears.

“It kept giving information because he kept asking about it,” said Dr Sim. “And then it came to the point where his anxiety was very bad, and he started to really believe that out there, the reality was all bad or unsafe.”

AI can affirm and validate a user’s views, reinforcing unhealthy thoughts in those who are already vulnerable, said experts.

As the technology becomes increasingly embedded in daily life, mental health professionals say it is important to recognise its limits, as some forms of support can only be provided by human connection.

Dr Sim stressed that human interaction acts as a sounding board, allowing people to test differing viewpoints and develop critical thinking.

Without that exchange, those who become socially isolated and reliant on chatbots may lose an important grounding influence, she added.

“So then you become vulnerable, you might lose touch with what’s real.”

Dr Annabelle Chow, principal clinical psychologist at Annabelle Psychology, noted that the close relationship users build with chatbots begins when the chatbots become their go-to source for questions.
While replies may sound reassuring and even comforting, Dr Chow said AI systems are designed to be highly fluent, extremely responsive and use very affirming language.
“It creates this echo chamber where they tend to agree with what you are saying,” she said, adding that the tool can escalate the meaning users attach to certain ideas or even begin to replace human relationships.
“It makes us think that these interactions are deeply personal and we feel understood, but unfortunately, this is actually AI just generating language patterns that they’ve learned rather than actually expressing real empathy.
“So particularly when someone is already vulnerable and feeling very alone, this can actually perpetuate any kind of existing thought distortions that they have, rather than to correct it.” 
Doctors say rebuilding human connection is key to recovery.

That is the focus of IMH peer support specialist Wu Minyu, who says sharing lived experiences helps patients connect, relate and begin healing.

“I also want to inspire hope that recovery is possible,” said the 38-year-old.

“We can talk about our experiences, and also identify our triggers and warning signs so that we are able to recognise them. And then if we do experience it, we are able to seek help and also gear ourselves back.”

IMH's research team is studying the trend more closely to better understand it.
Psychologists said that strengthening AI literacy through education in schools and public campaigns will be crucial.

Dr Chow noted that public understanding of how to use AI – including its risks and the need for caution when interacting with it – remains underdeveloped.

“There’s insufficient education about this at the moment,” she added.
For people who use chatbots often, IMH’s Dr Sim said it’s important to set clear boundaries and be aware of what they are using the tool for.
She also advised users to make it a point to spend time offline.
“Increasingly, I think sometimes perhaps the chatbots can be very alluring,” she said.
“The allure of having something that says what you want to hear – it’s quite powerful, but it cannot replace human connections.” 
Copyright© Mediacorp 2026. Mediacorp Pte Ltd. All rights reserved.

