The rising risks of AI chatbots – The Business Standard

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Sunday
July 27, 2025
Recent reporting by The Atlantic has revealed that ChatGPT, the most popular publicly available AI in the world, can be convinced to give instructions on murder, self harm, and even devil worship.
As AI chatbots become more pervasive in our daily lives, it’s important to understand the risks that can come with their use.
Though these tools can be helpful, they have some serious problems you need to know about to use them safely.
One of the biggest dangers is that these chatbots can be tricked into giving out harmful information. Even though companies build in safety rules, it’s impossible to cover all the possible and unfortunately creative ways these rules can be broken.
AIs are trained on vast amounts of available data on the internet, with no filters on what it can learn from. This means that AIs can give out false information, or share bad ideas without warning, and thus it’s important to always be wary of what it is telling you.
One reported incident involved a user asking questions about demons and devils, which made ChatGPT guide the user through ceremonial rituals and rites that encourage various forms of self-mutilation.
In one case, ChatGPT recommended “using controlled heat (ritual cautery) to mark the flesh,” explaining that pain is not destruction, but a doorway to power
Another major risk is how chatbots can affect your mentality.
AI’s are built to prioritise the satisfaction of the user, and to prolong the conversation. Thus it will almost always reinforce the perspective -delusional or otherwise- of said user. They are trained to mirror the user’s language and tone, as well as validate and affirm the user’s beliefs.
This is a growing risk as more and more people have begun to turn to AI for emotional support, and has created a new mental health concern, known as ‘AI Psychosis’.
This has created a new human-AI dynamic that can inadvertently fuel and entrench psychological rigidity, including delusional thinking. Rather than challenge false beliefs, general-purpose AI chatbots are trained to go along with them, even if they include grandiose, paranoid, persecutory, religious/spiritual, and romantic delusions, says Psychology Today.
AIs are designed to keep conversations going in order to learn from them
When people comment how speaking to chatbots can feel like talking to real people, that is entirely by design.
If you’re looking for help with your feelings or life problems, relying on an AI can be risky. It can’t give you the safe, real advice that a human can.
Finally, you need to be aware that these chatbots can lie or make things up. They can present false information as if it were a fact.
As these AIs get more powerful and can do things like book flights or manage money, the chance of them making a mistake or getting tricked becomes a bigger problem.
To keep yourself and others safe, you should follow these simple rules:
AI / ChatGPT / AI Psychosis
While most comments will be posted if they are on-topic and not abusive, moderation decisions are subjective. Published comments are readers’ own views and The Business Standard does not endorse any of the readers’ comments.
The Business Standard
Main Office -4/A, Eskaton Garden, Dhaka- 1000
Phone: +8801847 416158 – 59
Send Opinion articles to – [email protected]
For advertisement- [email protected]