Weaponised Disinformation Threatens Democratic Values
ISF Chief Executive, Steve Durbin, featured in Information Week

Fortifying democracy against AI-driven disinformation will involve public awareness campaigns and childhood education.

The headline blares, “China is targeting US voters and Taiwan with AI-powered disinformation” and is spending billions doing so. Artificial intelligence is not just a symbol of innovation but a source of digital threat, especially through the spread of misinformation and disinformation. (The terms are often used interchangeably, with "misinformation" referring to false information shared online unintentionally and "disinformation" to falsehoods spread deliberately.) Below are insights from my interview with Brian Lord, CEO of Protection Group International, a British firm specialising in risk management since 2013. We discussed looming AI-related security threats that could bear negatively on democratic values, and the urgent need for proactive measures to thwart these risks. As we navigate these challenges, it's crucial to explore the multifaceted role AI plays in shaping public opinion and policy, and the vigilant, educated public required to protect democratic principles.

Changing Dynamics of Cybersecurity Threats

The digital world is teeming with a growing array of cyber threats that use AI to create and spread disinformation. The misuse (and abuse) of AI extends beyond manipulating facts and exploiting network and software vulnerabilities; it makes already contentious issues even more divisive. AI-fuelled cyber manipulation is especially insidious because it subtly alters public discourse, with the power to influence elections and policymaking while appearing sincere and truthful. The power of AI-driven disinformation lies in its ability to mimic human behaviour, creating images and messages that resonate deeply yet distort the truth.

Societal Implications of Mis- and Disinformation

With significant democratic elections scheduled around the world, it's crucial to examine society's responsibility for safeguarding electoral integrity. While direct attacks on voting systems may cause temporary disruptions, their rarity and limited effects pale in comparison with the threat posed by AI-powered disinformation. The real potency of AI lies in its capacity to create and spread false narratives that sway public opinion, often exacerbating existing societal divisions. Disinformation campaigns don't emerge from nowhere; rather, they amplify highly controversial topics such as immigration, where public sentiment is already polarised.

By exploiting these divisive issues, such malicious operations erode trust in the media and government, undermining democratic institutions. Fundamentally, the goal of these cybercriminal adversaries is to distort public perception and discourse, ultimately influencing electoral outcomes more enduringly than a wave of direct hacking attempts ever could. The consequences of electoral disinformation campaigns extend beyond the creation of false narratives; they breed wider societal discord with far-reaching effects.

AI-driven operations contribute to societal tensions, causing rifts that could prove difficult to heal. It is crucial for everyone with a stake in this issue – from policymakers and technology firms to educators – to take a deliberate stand against these attack vectors and to strengthen societal resilience against the creeping threat of digital falsehoods.

Strategies to Combat AI-Fuelled Disinformation

A multifaceted strategy is crucial for safeguarding the authenticity of democratic processes. The strategies below emphasise the need for significant collaboration across multiple sectors to mitigate these threats and combat the proliferation of disinformation.

  1. Educate on cyber awareness from a young age: Incorporating cyber awareness into educational curricula is not a far-fetched idea considering the drastic rise in AI-driven intrusions. This proactive educational strategy should go beyond the basics of digital literacy and include critical thinking skills that question the validity and biases of digital content. Foster an environment where questioning and verifying online information become standard practice. Training young minds to navigate the minefield of digital content will enable future generations to distinguish between reliable information and potential falsehoods.
  2. Public campaigns to boost critical thinking and media literacy: Societies should consider running public awareness campaigns focused on enhancing media literacy for all age groups. A deeper understanding of how information is created and shared online helps individuals assess the credibility of sources and the content they engage with. This strategy plays a vital role in educating the public and creating an informed electorate able to resist the influence of misleading content built on falsehoods.
  3. Collaboration among governments, tech companies, and civil society: Addressing the spread of AI-driven disinformation requires collaborative approaches to develop stronger technological solutions and effective regulatory frameworks. Partnerships involving governments, tech companies, and civil society can promote the exchange of best practices and advances in AI governance. These collaborations are vital for establishing resilient systems that not only identify and counter mis- and disinformation but also uphold freedom of speech and the dissemination of authentic information.

Reflecting on the Importance of Preserving Democratic Principles

The complex issue of AI-driven disinformation poses a threat to the core foundations of democratic societies. Against the dual threat of cyberattacks and the widespread dissemination of false information, education emerges as an adaptable, dynamic defence, empowering individuals with the scepticism and insight needed to navigate digital content more intelligently.

Policymakers, educators, and tech experts should prioritise investment in practical societal solutions that uphold democratic values, address current threats, and anticipate future AI risks while they are still in their early stages. Regulations should be implemented to hold platforms accountable for the content they generate. By promoting awareness and fostering collaboration, we can strengthen our democracies against the profound impact of AI-facilitated disinformation.


Learn more about the scope of AI's influence on electoral integrity in Steve and Brian's episode of the ISF Podcast: https://bit.ly/4dcDa2a
