Turning AI against the conspiracy theorists

Young people no longer trust institutions – and AI may be the tool to fight back against the radicalisation risk.
Traditional metrics of trust across society are in free fall. So-called “vertical trust”, the confidence placed in governments and media, has collapsed. But so has trust in one another, known as “eye-contact trust”.
In the United States (US), the share of adults who report that “most people can be trusted” has declined from 46% in 1972 to 34% today. A mere 2% of Americans say they trust Congress to do what is right most of the time, compared with 73% six decades ago.
Worryingly, this is a global trend and is most pronounced among the young.
The decline in trust has been linked to a wider weakening of democratic legitimacy and growing support for alternatives to democracy. In the realm of extremism, trust plays a dual role: its presence acts as a buffer against radicalisation, while its absence leaves individuals more vulnerable to extremist narratives and less receptive to deradicalisation efforts.
Indeed, the wave of political violence that has gripped the US, including the assassination of Charlie Kirk, has been linked to the high level of polarisation that characterises the country. Events in Minnesota in recent weeks, which saw two US citizens killed by immigration agents during protests, will compound the challenge.
All of which suggests that declining trust will produce a more fractured, contentious and possibly violent political arena moving forward.
Yet trust is not vanishing. It is transforming. Instead of institutions, young people increasingly put their faith in peer networks, influencers, and now, Artificial Intelligence (AI). Rather than simply accepting that trust has evaporated and resigning themselves to its consequences, policymakers need to understand how the dynamics of trust have been altered and respond accordingly.
The digitisation of society over the past two decades has enabled a paradigm shift beyond eye-contact and vertical trust to what is sometimes referred to as “distributed trust”. Trust used to flow upwards to leaders and experts. Now, networks, platforms and marketplaces redirect that flow sideways to peers, strangers and crowds, creating a dispersion of authority and fracturing of trust.
Consider how Gen Z, those born roughly between 1997 and 2012, navigates information online. They do not consume news the way their elders do. After skimming headlines, they often jump to the comments section to crowdsource credibility before reading the story itself. More tellingly, they turn to influencers who speak their language and share personal experiences, prioritising perceived authenticity over institutional credibility.
The most recent twist in this story is AI. The technology has long been quietly embedded in daily life, in tools such as spell checkers and spam filters. But the recent emergence of generative AI marks a distinct shift from invisible optimiser to interactive social actor. Youth attitudes towards this development have not been uniformly enthusiastic. Some show rising scepticism towards AI companies and the wider technology industry, particularly over their societal and environmental impacts.
And yet for others, AI represents a compelling tool: individuals have been observed consulting, and trusting, chatbots for medical advice, relationship counselling and stock tips.
This is a paradox. While users may distrust the AI industry, they can experience interactions with AI interfaces themselves as less judgemental and more supportive than those with humans. This creates a sense of “empathy” that, even if artificial, can encourage further engagement and the sharing of personal information.
This shift brings real dangers that should not be minimised or ignored. Relying on chatbots to ease growing loneliness, for instance, may deepen isolation rather than relieve it, and in some cases could make users more susceptible to harmful or extremist influences.
There is another way to see this transformation, however. Gen Z’s willingness to trust AI suggests a broader reconfiguration of trust, one that, if guided responsibly, could help arrest the decline. By meeting young people where they already seek support, AI may serve not as a barrier but as a bridge to engagement, especially for those who feel dismissed or unheard by traditional institutions.
This is not purely hypothetical. AI chatbots have been shown to reduce conspiratorial beliefs. Two studies directed individuals who expressed belief in a conspiracy theory to converse with an AI chatbot that argued against the theory using facts and evidence. In both cases, the authors found that conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply entrenched. Another study using a different methodology found similar results.
The use of AI in this way is, of course, not without risk, from data-privacy concerns to questions of oversight and transparency. But AI can be a tool for policymakers to undermine conspiratorial beliefs and reduce the risk of political violence.
While it cannot, and should not, replace real human interaction as part of the deradicalisation process, well-calibrated human-machine interactions could serve as entry points for disengaging individuals from harmful ideological echo chambers, offering a scalable supplement to traditional deradicalisation pathways. Gen Z’s preference for non-judgemental, responsive, and digitally embedded sources of guidance presents a unique challenge, but also a rare opportunity. AI, if designed ethically and deployed strategically, can serve not only as a tool of communication but also as a medium of early intervention.