
How Newsrooms Are Using AI Chatbots to Leverage Their Own Reporting — and Build Trust

Image: Shutterstock
Ask a major AI chatbot or internet search engine a question such as: “How did the US bombing of Iran affect global oil prices?” and you’ll get a slew of confident-sounding figures, site links, and trends. The long answer from ChatGPT, for instance, begins: “Brent crude briefly jumped over 3%, reaching around $81–82/barrel — the highest since January.”
Ask the same question of an internal newsroom AI chatbot — such as the Washington Post’s “Ask The Post AI” — and you’ll typically get a key-takeaways-style answer that is both simpler and more cautious, displayed above thumbnails of the cited source articles: “As of now, oil prices have fluctuated, with global crude down about 1%, but the situation remains uncertain.”
But the major difference is the trustworthiness of the information sources for these answers. Large language model (LLM) AI chatbots search almost the whole internet, including unvetted sources. Meanwhile, in the past few years, several major newsrooms from the Philippines to the UK have introduced internal generative AI chatbots that are supposed to answer reader questions based exclusively on their own site’s reporting and vetted source databases, using strict code guardrails designed to prevent misinformation or bias creeping in from outside.
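None of these newsrooms has published its code, but the pattern they describe (retrieval-augmented generation restricted to a vetted archive) can be sketched in a few lines of Python. The snippet below is purely illustrative: `archive_index`, `llm`, and the prompt wording are invented stand-ins, not any outlet’s actual implementation.

```python
# Illustrative sketch of the closed-archive pattern: retrieve only from
# the newsroom's own published stories, then force the model to answer
# from those passages or refuse. All object names here are invented.

def answer_from_archive(question: str, archive_index, llm) -> str:
    # 1. Retrieve candidate passages from vetted, published stories only.
    passages = archive_index.search(question, top_k=5)  # assumed search API

    if not passages:
        # 2. No vetted source material: refuse rather than guess.
        return "Sorry, we couldn't generate an answer."

    # 3. Constrain the LLM to the retrieved excerpts.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer using ONLY the excerpts below, drawn from our published "
        "reporting. If they do not contain the answer, reply exactly: "
        '"Sorry, we couldn\'t generate an answer."\n\n'
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return llm.complete(prompt)  # assumed model-call API
```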
Leading examples of these smart news interfaces include the Rai tool from Rappler in the Philippines, Ask FT at the Financial Times, Forbes’ Adelaide chat, and Ask The Post AI.
(Note: These are distinct from AI news summary tools that newsrooms such as The Wall Street Journal, Bloomberg, and Yahoo News have introduced, which don’t answer reader questions but rather assist busy readers by inviting them to click a tab to find AI-generated bullet point story takeaways.)
All of these chatbot tools are described by their newsrooms as experimental and capable of error, partly because they’re built on top of the digital scaffolding of LLMs, which are notorious for bias, error, and ‘hallucinations.’
But — given the problems of misinformation, false attribution, unsourced opinion, and bias that beset broader internet searches — they share the noble purpose of offering readers information those readers are already predisposed to trust, thanks to their loyalty to the media brands behind them. The answers are generated from fully reported and edited stories in the newsroom’s archive. And, despite AI chatbots’ potential flaws, experts are cautiously optimistic that they may offer newsrooms some solutions for greater reader engagement, youth reach, and even potential subscription-based revenue streams.
Image: Screenshot, Rappler / Rai
Rappler’s Rai began as a low-risk experiment: as an AI “moderator” for focus group discussions within the relatively safe space of a Rappler chatroom. In October 2024, it grew into a much more ambitious tool on the Rappler app: a conversation bot — rather than a summarization tool — that now draws only on the facts of more than 400,000 published Rappler stories and vetted datasets. According to Gemma Mendoza, head of digital services at Rappler, these include more than 350,000 articles published since Rappler’s founding in 2011, plus 10 years of in-depth and investigative stories from its predecessor, Newsbreak Magazine, and datasets on results, candidates, and voter rolls from the past five national and local elections in the Philippines.
Mendoza says that a central purpose for launching Rai was to incentivize readers to download and use Rappler’s mobile app, Rappler Communities.
“Social media has really deprioritized news, so how do you engage with your audiences; how do you have sustainable traffic and have your audience engaging with your content?” she asks. “The other way to sustain that relationship is through newsletters, which are not really a habit here in the Philippines. We have ambitious engagement targets for our app, and Rai is a way to interact with and surface our content in a more nuanced way.”
Every 15 minutes, the ‘knowledge graph’ database behind the system is supposed to add the latest reports and investigations to its arsenal of archived stories and source datasets, and Rai then offers informed and relatively up-to-date responses, along with several linked stories, to app users.
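Rappler hasn’t detailed how that refresh works internally, but a 15-minute ingestion cycle is simple to picture: a scheduled job that pulls newly published stories and adds them to the retrieval index. The sketch below is hypothetical, with `fetch_stories_since` and `index.add` standing in for whatever CMS and database calls a real system would use.

```python
import time
from datetime import datetime, timedelta, timezone

REFRESH_INTERVAL = timedelta(minutes=15)

def run_ingestion_loop(cms, index):
    """Hypothetical 15-minute refresh cycle: pull stories published
    since the last sync and add them to the chatbot's retrieval index."""
    last_sync = datetime.now(timezone.utc)
    while True:
        time.sleep(REFRESH_INTERVAL.total_seconds())
        now = datetime.now(timezone.utc)
        for story in cms.fetch_stories_since(last_sync):  # assumed CMS call
            index.add(story.id, story.headline, story.body)  # assumed index call
        last_sync = now
```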
“The problem with AI in general is that it gets information from everywhere, and basically treats each point as if it’s the same,” Mendoza says. “We stand by our stories at Rappler and we vet the facts, and that’s the foundation of Rai.”
But newsrooms don’t have the large data engineering teams that commercial providers employ to maintain these complex systems, and failures are still common in this experimental era of newsroom conversation bots. For instance, for several weeks in July, Rai was not including the latest stories due to a problem with its update function, so some answers were out of date.
Despite an LLM infrastructural foundation, Mendoza says the latest, custom-built version of Rai includes innovative and hyper-strict guardrails to prevent outside information from seeping into results.
As the explainer note in the app states: “This is unlike other chatbots whose data sources include random websites whose content are not necessarily vetted. After all, garbage in, garbage out.”
“It’s really conversational — we’ve created an infrastructure on top of existing LLMs; it’s agnostic, so we can decide to move it to other LLMs if we need to,” she explained. “Answers are drawn from published stories and structured data.”
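Being “agnostic” in this sense means keeping a thin abstraction layer between the newsroom’s retrieval and guardrail logic and whichever model vendor sits underneath. A minimal sketch of that idea, using an invented `LLMBackend` interface rather than Rappler’s actual architecture, might look like this:

```python
from typing import Protocol

class LLMBackend(Protocol):
    """The only capability the chatbot layer needs from a model vendor."""
    def complete(self, prompt: str) -> str: ...

def make_newsroom_bot(backend: LLMBackend, retrieve):
    """Retrieval and guardrail logic live above the backend, so the
    newsroom can swap model vendors without rewriting the chatbot."""
    def ask(question: str) -> str:
        context = retrieve(question)  # archive-only retrieval, as sketched above
        if not context:
            return "Sorry, we couldn't generate an answer."
        return backend.complete(
            f"Using only these excerpts:\n{context}\n\nQuestion: {question}"
        )
    return ask
```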
Mendoza adds: “We focus on domains that Rappler is really good at, starting with politics.”
Only available to “FT Professional” subscribers, the Ask FT generative AI tool pulls its answers from Financial Times archives dating back to 2004.
In May, Bill Grueskin, professor of professional practice at Columbia Journalism School, tested out this chatbot and noted that, while its facts are journalistically vetted, some responses quickly reveal the limitations of the tool’s database: “it can’t seem to answer questions like ‘How many cars did Tesla sell in 2024?’”
However, despite not being a regular AI user himself, Grueskin found several benefits from the chatbot.
“The interface is simple,” he says. “There’s a box into which you type your question — such as ‘What is the latest on the China/US trade war?’ — and within seconds it can churn out a thoughtful, accurate summary answer, with footnotes and links to FT stories that informed its response.”
In an interview with GIJN, Grueskin says the newsroom AI model offers modest promise for news engagement, a better informed public, and much better archive search.
Ask FT pulls its answers from Financial Times archives dating back to 2004. Image: Shutterstock
“I do like the idea, in general,” he explains. “One reason is that most news sites’ search engines stink. Even the major outlets: they give you bad results, they don’t let you sort according to date. I get much better search results when I put the terms into Google along with ‘site:nytimes.com,’ for instance. If AI makes that a better experience, and answers are from one underlying database that involves edited stories already published — and it includes any later corrections, if the reporter got something wrong — then great.”
He added: “That’s assuming you can do it so the AI isn’t inserting hallucinatory quotes and facts.”
Dr. Latanya Sweeney, head of the Public Interest Tech Lab at the Harvard Kennedy School, says one potential problem with newsroom chatbots, beyond their limited databases, is that AI programs may provide inadvertently misleading answers based on past coverage. For instance, answers about one government sector may be skewed by facts about newsworthy controversies on that topic. She says some newsroom interface tools may ultimately seek a middle ground between the rigid data limits of a single news archive and the broader internet, by drawing on archive collaborations between independent newsrooms.
“If the articles in your archive are narrowly specific on a topic, and only talk about one piece of it, then it may only answer the question from that perspective,” she explains. “There may be a better, properly sourced answer out there. There’s a lot of rigidity in using just your archive. So it might be beautiful to have ecosystems of trusted journalism archives [informing AI answers].”
Experts say the digital firewall against external sites greatly reduces the risk of these internal chatbots fabricating facts, but does not eliminate it.
Responses from Rai are governed by Rappler’s normal editorial corrections policy, and readers are invited to point out any possible errors in chatrooms or by email. The outlet states: “We will endeavor to root out the cause of the mistake and correct the source material if needed.”
For many users, one of the most refreshing and confidence-inspiring features of newsroom chatbots is that they sometimes reply to a question, effectively, with “I don’t know.” If you ask “Who ordered the assassination of Sir Henry Wilson in London in 1922?” the Post’s chatbot admits: “Sorry, we couldn’t generate an answer.” But ChatGPT confidently replies with “Likely ordered by: Michael Collins, leader of the Irish Free State Provisional Government,” despite this being a highly contentious and divisive claim surrounding an event that triggered the Irish Civil War.
“[A] problem with AI is that it tends to try to answer questions even if it doesn’t have the vetted data to answer it, so guardrails need to be in place,” Mendoza cautions. “So Rai knows how to say ‘I don’t know this,’ often because it was not confident it could interpret the question correctly.”
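That refusal behavior can be enforced outside the model rather than left to its judgment. One common approach, shown here as an illustrative sketch with an invented relevance threshold, is to score how well the retrieved archive passages match the question and decline to answer below a cutoff:

```python
REFUSAL = "Sorry, we couldn't generate an answer."
MIN_SCORE = 0.75  # invented cutoff; a real system would tune this empirically

def guarded_answer(question: str, archive_index, llm) -> str:
    """Refuse when the archive doesn't clearly cover the question,
    instead of letting the model improvise."""
    passages = archive_index.search(question, top_k=5)
    # Assume each retrieved passage carries a relevance score in [0, 1].
    confident = [p for p in passages if p.score >= MIN_SCORE]
    if not confident:
        return REFUSAL  # saying "I don't know" beats a confident fabrication
    context = "\n\n".join(p.text for p in confident)
    return llm.complete(
        f"Answer only from these excerpts:\n{context}\n\nQuestion: {question}"
    )
```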
At Rai’s launch, Rappler’s head of data and innovations, Don Kevin Hapal, suggested that the initiative represented a model for nonprofit newsrooms to emulate: “With Rai, we’re not just launching another AI tool, we’re defining how newsrooms can harness data and AI to build public trust. By anchoring Rai in the integrity of Rappler’s journalistic rigor, we’re showing that AI can be responsibly integrated to support truth in the face of disinformation.”
Prior to the advent of generative AI, some investigative newsrooms offered news answer interfaces based on interactive decision trees.
A prominent example was Chile’s LaBot, where — between 2017 and 2023 — users on Telegram or Facebook Messenger could tap through tabs to find conversation-style answers about the outlet’s own reporting, choosing from a menu of anticipated questions pre-written by journalists.
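Unlike generative chatbots, a decision-tree bot involves no model at all: every prompt and reply is authored in advance. Here is a toy sketch of the pattern; the menu text is invented, not LaBot’s actual content.

```python
# Toy decision-tree bot in the LaBot mold: every prompt and reply is
# pre-written by journalists, so nothing is machine-generated.
TREE = {
    "start": ("What would you like to catch up on?",
              {"1": ("The pension investigation", "pensions"),
               "2": ("This week's court ruling", "ruling")}),
    "pensions": ("Editor-written summary of the pension investigation.", {}),
    "ruling": ("Editor-written summary of the court ruling.", {}),
}

def run(node: str = "start") -> None:
    text, options = TREE[node]
    print(text)
    for key, (label, _target) in options.items():
        print(f"  [{key}] {label}")
    if options:
        choice = input("> ")
        if choice in options:
            run(options[choice][1])
```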
Francisca Skoknic, a co-founder of LaBot, says the popularity of that experimental chatbot illustrates the general appetite readers have for short, trusted conversations about facts.
“I think these [newsroom AI] chatbots make sense because, a lot of the time, people don’t have the time to read whole articles, but do want specific answers from a trusted brand — especially if it’s from an outlet with comprehensive, vetted information about current affairs in a country, like Rappler,” says Skoknic. “But it’s important that these tools are able to say ‘I don’t know,’ rather than trying to force answers. Chatbots can create an emotional connection, very different than with news.”
Image: Screenshot / Chile’s LaBot
Mendoza says the chatbot has also helped Rappler reporters prepare for news stories.
“Our editorial team has been using it a lot,” she notes. “For instance, our faith reporter does the groundwork for his stories, but, as a backup, he does these backgrounders that Rai helps facilitate for him, in terms of synthesizing what Rappler already knows.”
She adds: “One use case for Rai is help with issues you’re not always watching. There have been a lot of marathon hearings about [former President] Duterte’s confidential funds; you need to be up to speed, and you can ask Rai for recent updates. ‘What were the concerns about the health insurance funds? Why was the budget slashed? Who is this character who is surfacing in the news?’ You can ask Rai.”
For other newsrooms considering launching their own AI chatbots, Mendoza warns that it is crucial for journalists to be integrally involved in the design goals.
“Yes, you need data engineers and developers, and to deploy these tools in phases,” she explains. “But it’s important that the engineers not be left to themselves. You need tight integration between engineering and content. You need an editorial mindset in the way these bots are engineered.”
Mendoza adds that newsrooms should start by defining the purpose of a possible chatbot, and developing a clear understanding of what the technology can and cannot do.
“Summarization can be plug-and-play, but conversational chatbots are not plug-and-play,” she explains. “It’s important to have that guardrail architecture there, otherwise there is the possibility of making things up.”
Rowan Philp is GIJN’s global reporter and impact editor. A former chief reporter for South Africa’s Sunday Times, he has reported on news, politics, corruption, and conflict from more than two dozen countries around the world, and has also served as an assignments editor for newsrooms in the UK, US, and Africa.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License
Republish our articles for free, online or in print, under a Creative Commons license.
