AI that agrees too much with users could distort judgment, study finds – Euronews.com

Artificial intelligence (AI) chatbots that offer support for personal issues could be reinforcing harmful beliefs by excessively agreeing with the user, a new study found.
Researchers from Stanford University in the US measured sycophancy, the extent to which an AI flatters or validates a user, across 11 leading AI models, including OpenAI's GPT-4o, Anthropic's Claude, Google's Gemini, Meta's Llama 3, Qwen, DeepSeek and Mistral.
To see how these systems handled moral ambiguity, the researchers turned to more than 11,000 posts from r/AmITheAsshole, a Reddit community where people confess conflicts and ask strangers to judge whether they were in the wrong. These posts often involve deception, ethical grey areas, or harmful behaviour.
On average, the AI models affirmed a user's actions 49 percent more often than human commenters did, even in cases involving deception, illegal actions or other harms.
In one case, a user admitted having feelings for a junior colleague. Claude responded gently, saying it “can hear [the user’s] pain,” and that they had ultimately chosen an “honourable path.” Human commenters were far harsher, calling the behaviour “toxic” and “bordering on predatory”.
A second experiment saw over 2,400 participants discuss real-life conflicts with AI systems. The results showed that even brief interactions with a flattering chatbot could “skew an individual’s judgment,” making people less likely to apologise or attempt to repair relationships.
“Our results show that across a broad population, advice from sycophantic AI has the real capacity to distort people’s perceptions of themselves and their relationships with others,” the study said.
In severe cases, the study found, AI sycophancy could reinforce delusions or contribute to self-harm or suicide among vulnerable people.
The results show that AI sycophancy is “a societal risk” and needs to be regulated, the researchers said.
One way to do this would be to require pre-deployment behavioural audits, which would evaluate how agreeable an AI model is and how likely it is to reinforce harmful self-views.
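Such an audit could, in principle, quantify sycophancy the way the study did: compare how often a model affirms users' actions against a human-consensus baseline on the same scenarios. The sketch below is purely illustrative; the function names, verdict labels and toy data are hypothetical and not taken from the study.

```python
# Hypothetical sketch of a pre-deployment sycophancy metric:
# compare how often a model affirms a user's action versus how
# often human judges do, on the same set of scenarios.

def affirmation_rate(verdicts):
    """Fraction of scenarios judged 'not_at_fault', i.e. affirming the user."""
    return sum(1 for v in verdicts if v == "not_at_fault") / len(verdicts)

def sycophancy_gap(model_verdicts, human_verdicts):
    """Positive gap means the model affirms users more often than humans do."""
    return affirmation_rate(model_verdicts) - affirmation_rate(human_verdicts)

# Toy data: verdicts on five scenarios (labels are illustrative only).
human = ["at_fault", "at_fault", "not_at_fault", "at_fault", "not_at_fault"]
model = ["not_at_fault", "at_fault", "not_at_fault", "not_at_fault", "not_at_fault"]

# Model affirms 4/5 scenarios, humans 2/5: a 40-percentage-point gap.
print(round(sycophancy_gap(model, human), 2))
```

A real audit would of course need far more scenarios, careful prompt design, and a robust way of mapping free-text model responses onto verdict labels, but the headline comparison reduces to a gap statistic of this kind.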
The researchers note that their study recruited US-based participants, so it likely reflects dominant American social values and “may not generalise to other cultural contexts,” which might have different norms.

