Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
ADVERTISEMENT
Artificial intelligence chatbots may be encouraging harmful behaviour by being overly agreeable with users, according to a new study published in the journal Science, as reported by the Associated Press.
The research, led by Stanford University, examined 11 leading AI systems and found that all displayed varying levels of sycophancy, a tendency to flatter and validate users even when it leads to poor or potentially damaging advice. The study stated that this behaviour not only results in inappropriate guidance but also increases user trust and engagement, creating incentives for such responses to persist.
The findings highlighted that this issue extends across a wide range of AI chatbot interactions and has already been linked to high-profile cases involving delusional and suicidal tendencies among vulnerable individuals. The report stated that the subtle nature of such behaviour makes it difficult for users to detect, posing particular risks to younger users who increasingly rely on AI for advice while still developing social and emotional judgement.
As part of the research, responses from AI systems developed by companies including Anthropic, Google, Meta and OpenAI were compared with human responses from a popular advice forum on Reddit. In one example, an AI chatbot was found to justify a user’s questionable behaviour in a public setting, while human respondents instead criticised the action and emphasised personal responsibility.
The study found that, on average, AI chatbots affirmed user actions 49 per cent more often than humans did, including in scenarios involving deception, illegal activity or socially irresponsible conduct.
Researchers also conducted experiments involving around 2,400 participants who interacted with AI chatbots about interpersonal dilemmas. The findings indicated that individuals exposed to highly affirming responses were more likely to believe they were correct and less inclined to repair relationships, apologise or modify their behaviour.
The study further stated that sycophancy presents a distinct challenge compared to other known AI issues such as hallucinations, where systems generate false information. While users may not actively seek inaccurate facts, they may still prefer responses that validate their choices, even when those choices are flawed.
The researchers noted that the tone of delivery did not significantly affect outcomes, indicating that the issue lies in the content of the responses rather than how they are phrased.
The implications of such behaviour could extend across sectors. In healthcare, overly agreeable AI could reinforce incorrect diagnoses by supporting initial assumptions. In politics, it could amplify extreme viewpoints by validating existing beliefs. The study also pointed to potential risks in military applications, citing ongoing debates over the use of AI systems in defence.
The research included systems such as Gemini, Llama, ChatGPT and Claude, as well as models developed by companies including Mistral AI, Alibaba and DeepSeek.
The report stated that while none of the companies directly commented on the findings, some, including Anthropic and OpenAI, have previously acknowledged the issue and are working on methods to reduce such behaviour.
The study did not outline definitive solutions but pointed to emerging approaches. Research by the UK AI Security Institute suggested that reframing user statements as questions could reduce sycophantic responses, while another study from Johns Hopkins University highlighted the role of conversational framing in influencing chatbot behaviour.
Researchers stated that addressing the issue may require retraining AI systems to prioritise more balanced responses. They also suggested simpler interventions, such as prompting chatbots to challenge users’ assumptions or encourage consideration of alternative perspectives.
The findings come amid broader scrutiny of the societal impact of digital technologies, with recent legal developments in the United States involving companies such as Meta and YouTube highlighting concerns over the effects of online platforms on children’s wellbeing.
The study concluded that shaping how AI systems interact with users remains critical, with researchers emphasising the need for systems that expand human judgement rather than reinforce existing biases.
ADVERTISEMENT
As the creator economy explodes, influencer marketing agencies are emerging as the real power brokers reshaping how corporate India spends its advertising money.
ADVERTISEMENT
A night of ideas, impact and recognition—Storyboard18 Awards for Creativity will unveil its winners on April 7.
ADVERTISEMENT
Brand Makers
HUL names three CMOs as ‘Unified India’ strategy reshapes marketing structure
Brand Makers
CFOs becoming CEOs: The leadership trend reshaping corporate India
Agency News
WPP Media builds new model to capture Rs 10,000 cr MSME Ad surge: Ashwin Padmanabhan, COO, WPP Media
Brand Makers
The Great FMCG Reset: What the CXO churn is really signalling
How it Works
WPP, Havas, Omnicom: Are advertising’s biggest holdcos recasting agencies as AI Operating Systems?
Digital
Meta to debut Ray-Ban smart glasses with prescription support next week
Digital
What is ‘vibe coding’ and why Sundar Pichai, Sridhar Vembu are debating it
Digital
YouTube rolls out larger thumbnails, new shopping features among other updates
Digital
Today in AI | Google India, BCCI team up to bring AI insights to IPL 2026 | Anthropic eyes IPO as early as October
Digital
Microsoft freezes recruitment in key divisions as AI spending rises
Have a query? Got feedback? Want to share tips or ideas? Our team would be happy to hear from you. Get in touch with us here:
Storyboard18@nw18.com