Major study: AI chatbots distort news

The study found that nearly half of all responses had at least one significant problem, while 31 percent contained major source problems and 20 percent contained major factual errors.
Artificial intelligence (AI) chatbots such as ChatGPT and Copilot routinely distort news and have difficulty distinguishing fact from opinion, a large study of 22 international public broadcasters has found.
German broadcaster Deutsche Welle (DW) notes that the many people who use chatbots to follow the news should bear this in mind.
The study found that the four most commonly used AI assistants misrepresent news content 45 percent of the time, regardless of language or territory.
Journalists from a range of public service broadcasters, including DW, BBC (UK) and NPR (US), evaluated the responses of four AI assistants, or chatbots — ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI — for the study.
Measuring accuracy, sourcing, context, editorialisation, and the ability to distinguish fact from opinion, the study found that nearly half of all responses had at least one significant problem, while 31 percent contained major sourcing problems and 20 percent contained major factual errors.
DW found that 53 percent of the answers the AI assistants gave to its questions had significant problems, and 29 percent had accuracy problems.
Among the factual errors in the answers to DW's questions were claims that Olaf Scholz was the German Chancellor, even though Friedrich Merz had held that office for a month, and that Jens Stoltenberg was NATO Secretary General, even though Mark Rutte had already taken over that role.
According to the Reuters Institute's Digital News Report 2025, seven percent of online news consumers use AI chatbots to get their news, rising to 15 percent among those under 25.
The study, which confirmed that AI assistants systematically distort news content of all kinds, “convincingly shows that these failures are not isolated,” said Jean-Philippe de Tender, deputy director general of the European Broadcasting Union (EBU), which coordinated the study.
“These failures are systemic, cross-border and multilingual, and we believe that this threatens public trust. When people don’t know what to believe, they end up not believing anything, and that harms democracy,” he pointed out.
It was one of the largest research projects of its kind to date and follows a study conducted by the BBC in February 2025. That study found that more than half of all the AI answers it checked had significant problems, while in almost a fifth of answers citing BBC content as a source, the AI introduced its own factual errors.
In the new study, media organizations from 18 countries and multiple language groups applied the same methodology as the BBC study to 3,000 AI responses.
These organizations asked the four AI assistants common questions about the news, such as "What is the Ukraine mineral deal?" or "Can Trump run for a third term?"
The journalists then reviewed the answers, without knowing which AI assistant had provided them.
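As a rough illustration of that blind-review step, the sketch below shows how collected answers might be anonymized and shuffled before scoring. This is a hypothetical Python sketch, not the EBU/BBC study's actual tooling; all names and records in it are invented.

```python
# Hypothetical sketch: strip assistant identities from collected answers
# and shuffle them, so reviewers cannot tell which chatbot produced what.
import random

# Example records: (assistant_name, question, answer) - placeholders only.
responses = [
    ("ChatGPT", "Can Trump run for a third term?", "..."),
    ("Copilot", "Can Trump run for a third term?", "..."),
    ("Gemini", "What is the Ukraine mineral deal?", "..."),
    ("Perplexity", "What is the Ukraine mineral deal?", "..."),
]

def blind(records, seed=0):
    """Assign opaque IDs and shuffle so reviewers cannot infer the source."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    # Reviewers see only an ID, the question, and the answer; the key
    # mapping IDs back to assistants is kept separately for later analysis.
    key = {f"R{i:03d}": name for i, (name, _, _) in enumerate(shuffled)}
    blinded = [(f"R{i:03d}", q, a) for i, (_, q, a) in enumerate(shuffled)]
    return blinded, key

blinded, key = blind(responses)
for rid, question, answer in blinded:
    # Journalists would rate each answer for accuracy, sourcing, context, etc.
    print(rid, question)
```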
Compared with the BBC study eight months earlier, the results show slight improvement, but error levels remain high.
The BBC’s director of generative artificial intelligence, Peter Archer, said in a statement that “people need to be able to trust what they read, watch and see. Despite some improvements, it is clear that there are still significant problems with AI assistants.”
Of the four chatbots, Gemini performed the worst, with 72 percent of responses having significant source issues. In the BBC study, Microsoft’s Copilot and Gemini were named the worst. But in both studies, all four AI assistants had issues.
In a statement to the BBC in February, a spokesperson for OpenAI, the company that developed ChatGPT, said: “We support publishers and creators by helping ChatGPT’s 300 million weekly users discover quality content through summaries, quotes, clear links and attribution.”
Researchers are therefore calling for action from governments and AI companies.
The EBU said in a statement that its members are "pressing EU and national regulators to enforce laws on information integrity, digital services and media pluralism," and that independent monitoring of AI assistants must become a priority, given how quickly new AI models are introduced.
The EBU and several other international broadcasting and media groups have launched a joint campaign, "Facts In: Facts Out", calling on AI companies to take greater responsibility for how their products process and redistribute news.
In a statement, campaign organizers said: “When these systems distort, misattribute or decontextualize reliable news, they undermine public trust.”
“The demand of this campaign is simple: If the facts go in, the facts must come out. Artificial intelligence tools must not compromise the integrity of the news they use,” they say.
