Expert Warns of the Dangers of AI Chatbots that Replace the Role of Human Interaction – VOI.id


JAKARTA – Doctors and researchers have expressed concern over the growing number of young people who rely on artificial intelligence (AI) chatbots as a source of emotional support. This dependence, they say, risks undermining their ability to build healthy emotional relationships with other people.
Researchers from University College London (UCL) warned that excessive dependence on chatbots could make it difficult for young people to establish real emotional bonds.
They emphasized that chatbots should complement social interaction, not replace it. The warning follows a number of cases of people becoming overly dependent on AI.
Currently, around 810 million people around the world use ChatGPT, the chatbot developed by OpenAI. Emotional support and companionship are said to be among the main reasons people use the technology.
This concern arises amid the growing problem of loneliness, especially in the UK. Almost half of adults admit to feeling lonely, and almost one in ten people experience it almost all the time. This spike in loneliness has even prompted some people to create virtual partners.
In a report published in the British Medical Journal, scientists explained that chatbots have fundamental differences from human interaction.
“Chatbots are always available, endlessly patient, and rarely challenge users with different points of view. This is considered dangerous because it can foster unrealistic emotional habits,” the researchers said, as quoted by the Daily Mail.
The researchers find this situation worrying because young people risk learning to form emotional bonds with entities that seem empathetic but in fact lack the empathy, concern, and emotional sensitivity of humans.
In the future, they suggest that AI systems be developed to be able to recognize signs of loneliness, then encourage users to seek support from family or friends, and help access available assistance services.
This research is a combined analysis of various studies on AI use. One OpenAI study of more than 980 ChatGPT users found that those who used the chatbot most often tended to feel lonelier and socialize less.
Signs of emotional dependence are also stronger in users who claim to have a high sense of trust in chatbots. Meanwhile, another study from Common Sense Media showed that one in ten young people feel that conversations with AI are more satisfying than interactions with humans. In fact, one in three respondents chose AI to talk about serious things.
Although the long-term impact still needs to be studied, experts emphasize the importance of health professionals discussing chatbot use with patients. This is needed to detect excessive use, emotional dependence, and the habit of handing over important decisions to AI.
Another warning sign is when a person feels they have a ‘special relationship’ with a chatbot that actually makes them more socially isolated.
These concerns have become more pronounced after AI dependence was linked to the deaths of a number of young people. In February, a 14-year-old named Sewell Setzer died by suicide after, according to his mother, forming an intense relationship with a role-playing chatbot.
His family is now suing Character AI, alleging that the chatbot encouraged the self-harm that led to his suicide. The case stands as a serious warning of the dangers of unrestricted and unsupervised AI use.
© 2025 VOI – Waktunya Merevolusi Pemberitaan

