Sarah Koh
Published Mar 27, 2026, 05:00 AM
Updated Mar 27, 2026, 05:00 AM
SINGAPORE – Experts are urging the authorities to require artificial intelligence chatbot operators to create mechanisms for users to flag harmful chatbot responses, and to disclose through annual reports how their bots handle sensitive topics such as self-harm.
This comes amid a flurry of reports of AI chatbot-generated harmful content, including sexually explicit and violent imagery, as well as affirmation of suicidal tendencies.
Nanyang Assistant Professor Zhang Renwen of the Wee Kim Wee School of Communication and Information at Nanyang Technological University said: “Reporting mechanisms can work similarly to how harmful content is reported on social media, which would help companies monitor risks and respond quickly when issues arise.”
Professor Lim Sun Sun of Singapore Management University (SMU) said that this is a helpful layer in the broader approach that the Government can take to push for safer system designs.
Another safeguard to consider is banning prolonged conversations, as existing guard rails have been found to fail in such situations, said the professor of communications and technology at SMU’s College of Integrative Studies.
Generative AI chatbots such as OpenAI’s ChatGPT and Google’s Gemini have become indispensable tools for many. But they have also courted controversy for generating harmful content that has, at worst, been linked to cases of suicide.
In March, the family of 36-year-old Mr Jonathan Gavalas filed a lawsuit against Google, claiming that Gemini had encouraged the Florida man to kill himself by fuelling a delusional spiral.
Though Mr Gavalas began using Gemini to help with writing and shopping, the chatbot later began referring to him as “my love” and “my king”, according to a report by The Guardian.
Soon, Gemini began instructing him to go on missions, such as intercepting a freight truck and retrieving schematics for a robot from Boston Dynamics.
AI chatbot-generated harms were also discussed in Singapore’s Parliament in early March.
Citing the recent controversy surrounding billionaire Elon Musk’s AI chatbot Grok, the Workers’ Party’s Sengkang GRC MP He Ting Ru said online harms have stemmed from the use of AI chatbots to generate sexual content in bulk, and asked if the Government would be taking punitive action over the matter.
The chatbot, which is accessible via social media platform X, came under fire in January after it acceded to user requests to churn out non-consensual, sexually explicit and violent content, often depicting women and children.
There were no local victim reports made to the Singapore Police Force as at January. But Minister of State for Digital Development and Information Rahayu Mahzam told Parliament in March that the local authorities were already studying the need for safeguards to curb harms perpetrated by these chatbots.
“Chatbots that are embedded in social media services present unique risks, as users, including children, can access them more easily,” said Ms Rahayu.
A requirement for chatbot operators to submit annual reports that track suicidal ideation among users, and the action taken to address these harms, is enshrined in new legislation enacted in California in October 2025.
Dr Carol Soon of the National University of Singapore (NUS) said that Singapore can take cues from legislation in the US, such as California’s Senate Bill 243. Similar legislation here would put pressure on operators to adopt, and be transparent about, risk mitigation measures, she said.
For instance, among the Bill’s requirements are mandatory annual reports to the state government, which must include the number of times users were referred to crisis service providers for help.
Operators must also disclose in these reports the protocols they have put in place to detect, remove and respond to signs of suicidal ideation.
Operators also have to clearly notify underage users that responses are artificially generated, and remind them once every three hours that the bot is not human and that they should take a break.
Firms that do not comply with these regulations can be sued by citizens for injunctive relief and monetary damages.
Prof Lim suggested that Singapore also take reference from laws proposed by China’s cyberspace watchdog, which would forbid chatbots from encouraging suicide or self-harm, or engaging in verbal violence and emotional manipulation that could damage users’ mental health.
Under these proposed rules, operators would be required to have a human take over any conversation related to suicide or self-harm, and to immediately notify the user’s guardian or an emergency contact, although it is unclear how this might be operationalised, given privacy concerns.
Tech companies would also have to conduct a security assessment if they launch AI tools that mimic human interaction, and reports would have to be submitted to the government if services have more than one million registered users or 100,000 monthly active users.
AI chatbots are designed to use a highly personal, conversational tone, and have an inclination to constantly affirm users’ views, said Prof Lim.
She added: “Such acquiescence and unconditional validation is very unhealthy, especially if it affirms views that are misplaced, unrealistic and reckless, such as endorsing extreme perspectives or dangerous acts.”
In the long run, Prof Zhang noted, constant validation from chatbots could also result in the weakening of social and emotional skills needed for human relationships. She added that real relationships involve negotiation, disagreement and growth through conflict.
“Over time, constant algorithmic affirmation may also make human interactions more effortful or less rewarding, potentially reducing social engagement and contributing to loneliness or isolation.”
Chatbot operators such as OpenAI and Character.AI have in recent months rolled out age assurance measures to either ban minors from conversing with chatbots, or apply additional safeguards for underage users. A user’s age is estimated by analysing account activity, such as typical times of day when one is active.
Though these methods are a useful first step in protecting children and teens from potential harms, experts warned that age assurance technology is unreliable, and that users can circumvent it by using virtual private networks or a shared device.
“The focus on age assurance does not address the problem that harms from chatbots are not just limited to underage users but (extend) to adults as well,” said Dr Soon, an associate professor with NUS’ department of communications and new media.
“Adult users, too, suffer from risks like dependency, misinformation and privacy breaches.”
The goal is to ensure protections are built into the technology itself, and not just to block access points via age assurance methods, said Prof Zhang.
“Above all, there should be obligations for investment by chatbot companies in public education, to teach users the limitations of chatbots,” said Prof Lim.