AI And Conspiracy Theories: Can Artificial Intelligence Help Change Minds?

Thank you for reading my latest article, AI And Conspiracy Theories: Can Artificial Intelligence Help Change Minds? Here at LinkedIn and at Forbes, I regularly write about management and technology trends.

To read my future articles, simply join my network by clicking 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Podcast or YouTube.


Belief in conspiracy theories is more than a fringe phenomenon. From COVID-19 hoaxes to political cover-ups, conspiracy thinking has infiltrated every corner of society. Despite the ease of fact-checking in the digital age, many people continue to hold fast to these beliefs. What if artificial intelligence could change that? Recent research published in the journal Science suggests AI might be the key to reducing harmful conspiracy thinking.

Generative AI models, such as the GPT series, have shown surprising effectiveness in engaging conspiracy believers in tailored dialogues. By directly addressing the evidence people cite for their beliefs, AI can gradually chip away at even the most entrenched viewpoints. The question is: how does this work, and can AI really help society combat misinformation in a sustainable way?

A Persistent Problem

The prevalence of conspiracy theories is concerning, especially given their real-world consequences. From the insurrection on January 6th to COVID-19 denialism, these beliefs have not only threatened public safety but also undermined democracy itself. Traditionally, psychologists have argued that conspiracy beliefs fulfill psychological needs—providing believers with a sense of control or uniqueness—and are resistant to factual counterarguments.

But what if the problem isn’t so much the psychology of believers as the way facts are presented to them? Could it be that people cling to conspiracies simply because they’ve never encountered evidence in a way that truly resonates with them? This new research suggests that AI might hold the answer.

The Power of AI Dialogues

Research led by Thomas Costello, Gordon Pennycook, and David Rand tested a novel intervention: real-time conversations between an AI model and 2,190 participants, each of whom believed in a conspiracy theory. Participants were asked to explain the conspiracy they subscribed to, after which the AI engaged them in a three-round dialogue, challenging their views with fact-based counterarguments.
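The shape of that protocol is easy to picture in code. The sketch below is a minimal, hypothetical rendering of the three-round structure described above; generate_counterargument is a stand-in for the actual generative-model call used in the study, which is not shown in this article.

```python
def generate_counterargument(claim: str, evidence: str, round_no: int) -> str:
    """Stand-in for a generative-model call that tailors a fact-based
    rebuttal to the participant's own stated claim and evidence."""
    return (f"[Round {round_no}] On your claim that {claim!r}, the evidence "
            f"you cite ({evidence!r}) can be checked against public records.")

def run_dialogue(claim: str, evidence: str, rounds: int = 3) -> list:
    """Run the fixed three-round exchange: the participant states a belief
    and supporting evidence once, and the model replies each round with a
    counterargument aimed at that specific evidence."""
    transcript = []
    for r in range(1, rounds + 1):
        transcript.append(generate_counterargument(claim, evidence, r))
    return transcript

transcript = run_dialogue(
    claim="the 2020 election was stolen",
    evidence="reports of irregular vote counts",
)
print(len(transcript))  # one model turn per round
```

The point of the loop is not the stub text but the structure: a short, bounded exchange in which every model turn is conditioned on the participant's own words.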

The results were astonishing: the AI-driven conversations reduced belief in these conspiracy theories by an average of 20%. Even more remarkably, this effect persisted for at least two months. The conversations were highly personalized, addressing the specific evidence the participants presented, which likely contributed to their success.
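As a back-of-the-envelope illustration of what a "20% average reduction" means (made-up numbers, not the study's data), suppose belief in the conspiracy is rated 0-100 before and after the dialogue; the average relative drop can be computed like this:

```python
pre  = [80, 90, 70, 100]   # hypothetical pre-dialogue belief ratings (0-100)
post = [60, 75, 55, 80]    # hypothetical post-dialogue ratings

# Relative reduction per participant, then the group average.
drops = [(b - a) / b for b, a in zip(pre, post)]
avg_reduction = sum(drops) / len(drops)
print(round(avg_reduction * 100, 1))  # → 20.8
```

With these illustrative numbers the mean relative drop lands near the 20% the study reports; the actual paper's measurement details are in the original publication.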

What’s more, this technique wasn’t just effective for “small” or “fringe” conspiracy theories. Participants who believed in widely circulated conspiracies—like those involving COVID-19, the 2020 U.S. election, or even longstanding beliefs about the Illuminati—were just as likely to reduce their belief after interacting with the AI.

Why AI Works Where Humans Struggle

What makes AI more persuasive than your average human fact-checker? For one, AI doesn’t get emotional or frustrated, which is often a barrier in human-to-human debates. When someone refuses to budge, our instinct is to either argue more aggressively or disengage. AI, on the other hand, can keep a cool and consistent tone, guiding the conversation with infinite patience.

Another advantage AI offers is its ability to generate bespoke responses. Every conspiracy believer has their own version of why they believe what they do, and one-size-fits-all debunking simply doesn’t work. AI can process and respond to the specific arguments each individual makes, making the dialogue feel more like a personal discussion rather than a lecture.
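One way to make "bespoke responses" concrete: embed the believer's own open-ended rationale in the model's instruction, so the rebuttal targets their specific argument rather than a generic version of the theory. The prompt text below is hypothetical, not the study's actual prompt.

```python
def build_prompt(theory: str, rationale: str) -> str:
    """Compose a per-participant instruction that quotes the user's own
    stated theory and reasons, so the model's reply addresses exactly the
    evidence this individual finds persuasive."""
    return (
        "You are a calm, patient interlocutor. The user believes: "
        f"{theory}\n"
        f"Their stated reasons: {rationale}\n"
        "Respond only to the specific evidence they cite, using "
        "verifiable facts and a respectful tone."
    )

prompt = build_prompt(
    theory="COVID-19 was engineered as a bioweapon",
    rationale="I read that the virus has unnatural genetic markers",
)
print("unnatural genetic markers" in prompt)
```

Because the participant's rationale appears verbatim in the instruction, a capable model can answer the argument actually made, which is what distinguishes this from one-size-fits-all debunking.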

Crucially, the AI didn't just debunk conspiracies blindly. It was able to differentiate between unsubstantiated claims and those rooted in truth. When participants mentioned real conspiracies (such as the CIA's MKUltra experiments), the AI didn't attempt to discredit them, which likely boosted its credibility in other areas.

Sustained Impact: Changing Minds for the Long Term

The effectiveness of these AI dialogues isn’t just a fleeting win. The study showed that the reduction in conspiracy belief wasn’t a short-term effect that faded after a few days. In fact, participants showed no significant return to their prior levels of belief even two months later.

What’s even more impressive is the spillover effect. The dialogues focused on one specific conspiracy theory per person, but after interacting with the AI, participants also reduced their belief in other, unrelated conspiracies. This suggests that the intervention helped shift their overall worldview away from conspiratorial thinking.

Beyond just changing beliefs, the participants showed real behavioral shifts too. Many expressed increased intentions to ignore or argue against other conspiracy believers, and some were even less likely to participate in protests related to conspiracy theories. This behavioral change hints at AI’s potential to reduce the spread of misinformation in broader social contexts.

A Double-Edged Sword?

While the potential for AI to debunk misinformation is incredibly promising, the flip side of this technology must also be considered. AI can easily be trained to spread misinformation just as effectively as it can debunk it. Without careful guardrails, generative AI could be weaponized to reinforce false beliefs, making it essential for platforms and developers to enforce strict guidelines on how AI is used in public discourse.

That said, the positive implications of using AI as a tool for truth are profound. In a world where misinformation is rampant, AI could become an invaluable resource for journalists, educators, and fact-checkers. Instead of playing a game of “whack-a-mole” with every new conspiracy, we could see scalable solutions where AI systematically engages with misinformation on social media, in search engines, and beyond.

Optimism In A Post-Truth World?

The success of AI in changing minds should inspire optimism. For too long, it’s been assumed that once someone falls down the rabbit hole of conspiracy thinking, they are lost to reason. But this study shows that even entrenched conspiracy believers can be swayed with the right approach—an approach that is patient, personalized, and backed by evidence. AI might not single-handedly solve the misinformation crisis, but it certainly adds a powerful new tool to the fight.


About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.

He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world. Bernard’s latest book is ‘Generative AI in Practice’.



Zulfiqar Ali

Ali Online Business Consultancy Services

Wonderful study and post, Bernard.

Ben Samuel

Research, Analysis, & Feasibility checks of Startups (~ideas)

I have to disagree. AIs are just tools expressing the biases of their developers, their organisations' agendas, and their training datasets. So they should be regarded the same way as an opinionated person. Their community-safeguard standards are also biased, and usually put too many constraints on free speech (contradicting the First Amendment). They should forbid hate speech and related derogatory expressions.

Jean Ng 🟢

AI Changemaker | AI Influencer Creator | Book Author | Promoting Inclusive RAI and Sustainable Growth | AI Course Facilitator

If AI systems are trained to understand the psychological factors that contribute to belief in conspiracy theories, I think they can change people's minds. AI could analyse individual users' beliefs and tailor debunking arguments to their specific concerns and biases.

ANDRE NGUESSEU

ENERGY MANAGEMENT AND ENVIRONMENT CONSULTANT - ENGINEER

AI has the potential to significantly influence public perception and combat misinformation. By systematically addressing conspiracy theories on social media and in search engines, AI can provide tailored, evidence-based responses that engage users effectively. This approach fosters patient dialogue, allowing even deeply entrenched beliefs to be challenged. While AI alone won't eradicate misinformation, its ability to analyze vast amounts of data and identify patterns can empower journalists, educators, and fact-checkers to create informed narratives. Grounded in objective data, AI offers a hopeful avenue for promoting truth and understanding in an increasingly complex, post-truth information landscape, encouraging critical thinking and informed decision-making. As misinformation proliferates, AI can serve as a vital tool, offering scalable solutions to engage with false narratives across digital platforms.
