How AI is shaping elections | LSE British Politics – The London School of Economics and Political Science

Informed debate for better governance
Estimated reading time: 7 minutes
Despite a growing focus on the impact of AI on elections, we know surprisingly little about how voters use AI to engage with politics. In a new survey of UK adults, Sayeh Yousefi, Ben Tappin and Jens Madsen find self-reported AI use for political information is widespread and may already be shaping UK elections.
There is widespread concern about the role of AI in elections. While we don’t know yet if all these concerns are justified, research has shown that AI can persuade people on political issues, develop convincing deepfakes and create misinformation at scale. There have been cases of deepfakes being disseminated about political candidates during an election and reports of suspected interference in elections using AI. At the same time, there are also potential positive consequences of AI on democratic participation. Chatbots may help voters better understand candidates and navigate policy debates. Indeed, compared to common alternatives (such as browsing social media), chatbots could be an improvement.
Despite all the debate around risks and benefits, we still know very little about how ordinary people are actually using AI to engage with politics. Are people using AI primarily to fact-check claims they come across on social media, to find information on politicians running for office, to help them decide how to vote in an election, or something else? We explore these questions with data from a survey we conducted in 2026 of a quota-matched sample of UK adults (sample size of 2,137; matched on age, gender and ethnicity).
We started by asking people how frequently they use AI chatbots and AI Overview – the AI-generated summary that often appears at the top of the results page when you search for something on a browser like Google.
We then asked people which AI models they used most. 93.1 per cent of respondents said they had used an AI chatbot before, with 35.7 per cent saying they use a chatbot every day and 37.1 per cent saying they use one once or twice a week. ChatGPT was by far the most commonly used chatbot (75.9 per cent of respondents reported using it), followed by Gemini (29.7 per cent) and Copilot (27.2 per cent).
Next, we asked people where they get their political information. The Internet was the most popular source of political information (65.6 per cent), followed by television (42.9 per cent), newspapers (40.7 per cent) and social media (38 per cent). Interestingly, 20.3 per cent of respondents said they use AI chatbots to find political information, and 24.6 per cent said they get political information from AI Overview. A 2024 study that asked the same question of UK adults found that 9 per cent of respondents used AI chatbots as a source of political information.
Although our data is from a different sample, this suggests that the use of AI to find political information may have more than doubled since July 2024. What types of political information are people using AI to find? According to our survey, the most common political use of AI was to fact-check information (35.8 per cent), most often information people had come across on social media. The next most popular use was finding information on a specific political issue (33.5 per cent), followed by finding information on a politician (17.1 per cent) and information on elections or electoral processes (12.8 per cent). 4.5 per cent of respondents said they have used AI to help them decide who to vote for in an upcoming election.
We also asked people how likely they were to use AI in the future to find different kinds of political information. Here, consistent with their existing usage, respondents were most open to using AI to find information on specific topics or to fact-check information. Although people reported being least likely to use AI to help them decide how to vote, 14.1 per cent of respondents said they were open to doing so in the future. With millions turning out for the local elections of May 7 – widely seen as a litmus test for the UK's changing political landscape – even this minority of voters potentially using AI to help them decide how to vote could prove consequential.
If a growing number of voters are using AI to help them decide how to vote, it is extremely important for us to better understand what kinds of political information AI shares with voters, how accurate or biased it is and how that is changing over time and across AI models.
When we broke AI chatbot political usage down by age, we found no significant differences across age groups. We also broke usage down by political party, to see if the sources people reported using to find political information differed along party lines. Reform UK had the highest proportion of supporters who said they used AI chatbots to find political information (23.9 per cent), closely followed by Labour (22.2 per cent), the Conservatives (21.8 per cent) and the Liberal Democrats (21.4 per cent), with Green supporters lowest (14 per cent). Men reported using AI chatbots as a source of political information slightly more than women (men = 22.6 per cent, women = 18.3 per cent).
We also looked at how future intent to use AI for political information varied across political parties. While patterns were broadly similar across parties, Reform UK supporters were the most likely to say they were extremely likely to use AI across the political uses we asked about, with one exception: using AI to decide how to vote. On that use, differences across parties were small, with 5.6 per cent of Liberal Democrat supporters saying they were extremely likely to use it, followed by Reform UK (5.5 per cent), the Greens (4.9 per cent), Labour (3.1 per cent) and the Conservatives (2.8 per cent).
People seem to be increasingly using AI as a source of political information. But we still don't know whether they consider the information it gives them to be accurate or unbiased. When we asked participants, we found that political information from AI is by and large judged to be mostly or completely accurate. Less than 2 per cent of participants, for each type of use, judged the information to be completely inaccurate.
Similarly, most people judged the responses from AI to be politically neutral.
We also asked people who had asked AI for information on politicians whether the responses they got changed their attitudes on the politician. 10.1 per cent of respondents said it made them less favourable towards the politician and 9.6 per cent said it made them more favourable towards the politician. While self-reported attitude change should be taken with a grain of salt, it is worth noting that 19.7 per cent of people who had asked AI for information on politicians said they changed their attitudes towards the politician after the interaction.
We finished the survey by asking respondents how much they trust different kinds of political information they get from AI. People were generally quite trusting of most kinds of political information, except information on helping with voting decisions. We also looked at general trust (across all the different types of political uses), split by political party, to see if people differed in their trust evaluations. Reform UK supporters were the most trusting, followed by Conservatives, while Green supporters were the least trusting.
We also asked people how much they would trust political information coming from different AI models. People rated ChatGPT as the most trustworthy, followed by AI Overview, then Copilot. Grok was the least trusted model (32.8 per cent of respondents rated it as extremely untrustworthy), followed by Meta AI (18.3 per cent rated it extremely untrustworthy).
Concerns about AI influencing elections through disinformation and propaganda are widespread. However, less consideration has been given to how AI now functions as a new source of political information. Our survey results show that AI is increasingly used as a source of information on politics. This poses new risks but also benefits: it can help democratise information, allowing voters to find political information more easily and with greater relevance to them, which may improve political and civic engagement. But we still know little about what kinds of political content generative AI produces and how much it varies across models. To help ensure that the benefits of AI as a source of political information outweigh the risks, it is important to understand how different AI systems are informing citizens, and how this is changing over time and across models.
This research was supported by LSE SEED Grant #113098
All articles posted on this blog give the views of the author(s), and not the position of LSE British Politics and Policy, nor of the London School of Economics and Political Science.
Image credit: Lightspring on Shutterstock
Sayeh Yousefi is a PhD student at the Department of Psychological and Behavioural Sciences at London School of Economics studying persuasion and belief change. Her research explores how and when people change their minds about key ideological issues. She is specifically interested in how new information and evidence can lead to attitude change, and what factors of information are most influential in changing peoples’ minds.
Dr Ben Tappin is an Assistant Professor of Psychological and Behavioural Science at the London School of Economics. He is also an affiliate of the Human Cooperation Lab at the Massachusetts Institute of Technology and currently holds an early career research fellowship from the Leverhulme Trust in which he is investigating the impact of "microtargeted" persuasive advertising on voters’ attitudes and behaviour in democracies.
Dr Jens Koed Madsen is Associate Professor of Psychology in the Department of Psychological and Behavioural Science at LSE. He is interested in how belief and behaviour impact democratic stability, information fragility, discrimination, and environmental sustainability. Specifically, Jens researches how people update their beliefs about the world when they see new information and how they use their understanding of the world to guide their behaviour.
© 2026 London School of Economics and Political Science (LSE). LSE Blogs.

