What Effect Can AI Chatbots Have on Voters? – InsideHook

As we spend more and more time online, we run the risk of encountering larger and larger amounts of online disinformation. This can have a significant impact on politics: at the end of 2024, the U.S. government sanctioned groups based in Iran and Russia over their efforts to mislead voters in the lead-up to that year's election. Darrell M. West of the Brookings Institution argued that disinformation efforts "were successful in shaping the campaign narrative" in part due to numerous avenues of online dissemination.

If you've spent any time on social media in recent months, you've probably seen heated debate over AI-generated videos espousing one political point of view or another. That isn't the only way that AI technology can shape public opinion, however, and a pair of recently published studies came to an alarming conclusion about the ways AI chatbots can influence voters' opinions.

One study, published earlier this month in Nature, explored the way that chatbots attempted to influence voters in several elections, including both national elections (in Canada, Poland and the U.S.) and a local ballot measure. The researchers discovered something unsettling: “across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims.” The other study, published in Science, explored the mechanisms by which AI chatbots could become more persuasive.

Cornell University professor David Rand, who was involved in both studies, explained the nuances of this approach to persuasion in comments made to the Cornell Chronicle. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side. But those claims aren't necessarily accurate — and even arguments built on accurate claims can still mislead by omission," he said.
As Nature's Max Kozlov pointed out in an article on these studies, one of their most unsettling findings was how effective one method proved at persuading voters: "flooding the user with information." Unfortunately, this also makes sense: learning to separate fact from fiction and to ask the right questions about the context of a claim is challenging enough, and it only gets harder as the volume of information grows. How this will affect future elections remains to be seen, but it's bound to affect them in some way, and that's unnerving in its own right.
Copyright © 2025 InsideHook. All rights reserved.
