from opaque to open: can AI be used to counter polarization?
By: Eline van Doorn
One of the biggest challenges on the internet is the fight against disinformation. The 2024 US presidential elections showed how algorithms can affect public trust, opinions, and polarization. The role of media, AI, and algorithms is under constant scrutiny, and these technologies have changed election campaigns forever. Elon Musk used deepfake faces in a video on X to support Donald Trump. Trump himself posted AI-generated photos suggesting that Taylor Swift would vote for him. These are obvious examples of AI-generated information. Kamala Harris also used AI, but for communication purposes such as automated e-mails(1). Algorithms and AI are often blamed for pushing people into echo chambers(2). Can they also encourage balanced and open discussions? And if so, how?
US elections and algorithms
Algorithms are often built to boost user engagement, a strategy that drives significant revenue for platforms through targeted advertising(3). In practice, this means they surface content that grabs attention, which is often sensational or emotional. This focus on sensational content can lead to problems. Algorithms frequently create echo chambers by showing users content that matches their beliefs, thereby reinforcing biases(4). For example, social media platforms like Facebook and Twitter amplify content aligned with users’ political preferences. A user engaging with conservative-leaning content will see increasingly similar or more extreme conservative perspectives, and the same occurs for users engaging with liberal-leaning content. This reinforcement entrenches users in ideological bubbles, where opposing viewpoints are less likely to surface. Such echo chambers also amplify misinformation, such as false claims about mail-in ballots, which can affect voter perceptions and trust in the electoral process.
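To make this mechanism concrete, here is a minimal Python sketch of how pure engagement ranking narrows a feed. Everything in it is an illustrative assumption, including the one-dimensional leaning score and the similarity-based engagement model; no real platform works this simply.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    leaning: float  # -1.0 (left) .. +1.0 (right); a simplifying assumption

def predicted_engagement(user_leaning: float, item: Item) -> float:
    # Assumption: users engage most with content matching their views, so
    # ideological similarity stands in for a platform's engagement prediction.
    return 1.0 - abs(user_leaning - item.leaning) / 2.0

def rank_feed(user_leaning: float, items: list[Item]) -> list[Item]:
    # Pure engagement ranking: no diversity constraint, so the top of the
    # feed drifts toward whatever the user already agrees with.
    return sorted(items, key=lambda i: predicted_engagement(user_leaning, i),
                  reverse=True)

feed = rank_feed(0.7, [Item("Op-ed A", -0.8), Item("Report B", 0.0),
                       Item("Op-ed C", 0.9)])
print([item.title for item in feed])  # -> ['Op-ed C', 'Report B', 'Op-ed A']
```

Because the ranking rewards agreement, the user with leaning 0.7 sees like-minded content first, and every interaction with it strengthens the pattern.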
Content that is polarizing or misleading spreads quickly because it draws more attention, sometimes more than fact-based reporting. Polarizing content often evokes an emotional response, which drives higher engagement(5). AI makes it easy to create content that worsens the problem, especially now that deepfake technology is available to everyone. False information can spread quickly, making it harder for people to tell what is real from what is fake. When people repeatedly see AI-generated content without realizing it is artificial, it can damage trust in the media and make them more skeptical of other viewpoints.
The 2024 US elections highlighted the growing risks of algorithmic influence and the misuse of AI-generated content, underscoring vulnerabilities in the electoral process. In response, the US Election Assistance Commission (EAC) launched the 60-Second Security Series to address potential security issues efficiently. One of the critical areas tackled was the cybersecurity risk posed by AI. The series provided concise, actionable guidance to election officials on identifying and mitigating AI-related threats, such as deepfakes, synthetic media, and the manipulation of public opinion through algorithmically amplified misinformation. It emphasized training officials to recognize AI-generated content, strengthening cybersecurity defenses, combating disinformation campaigns, and raising public awareness. By implementing these strategies, the series bolstered election security, enhanced public trust in the integrity of the electoral process, and equipped officials with tools to respond to emerging technological threats(6).
diversity mode on
Luckily, there is an opportunity for media and tech firms to take forward-thinking steps. One solution is to create algorithms that show a range of content. This approach helps users see a broader variety of viewpoints, breaking them out of isolated information bubbles. For example, news platforms could build recommendation systems that include articles from across the political spectrum to offer balanced coverage. Ground News, Apple News, and BBC News are already doing this(7).
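As a rough sketch of what such a recommendation system could look like, the fragment below fills a feed round-robin across leaning buckets so that no single viewpoint dominates. The leaning labels and the bucket scheme are assumptions for illustration, not how Ground News, Apple News, or BBC News actually build their feeds.

```python
from collections import defaultdict

def diversified_feed(items: list[tuple[str, str]], slots: int) -> list[str]:
    # items are (title, leaning) pairs; leaning is "left", "center", or "right".
    groups = defaultdict(list)
    for title, leaning in items:
        groups[leaning].append(title)
    feed = []
    while len(feed) < slots and any(groups.values()):
        for leaning in ("left", "center", "right"):  # round-robin across the spectrum
            if groups[leaning] and len(feed) < slots:
                feed.append(groups[leaning].pop(0))
    return feed

articles = [("Op-ed A", "left"), ("Analysis B", "center"),
            ("Op-ed C", "right"), ("Report D", "left")]
print(diversified_feed(articles, slots=3))
# -> ['Op-ed A', 'Analysis B', 'Op-ed C']
```

The point of the round-robin is that balance is enforced structurally rather than left to engagement scores, so a user's feed always contains perspectives from across the spectrum.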
Adding fact-checking tools is another important step. Fact-checking works by validating published content against verified data and known facts. AI-driven tools could be built into content distribution to catch and correct false claims(8). These tools could cross-check information with trusted sources and alert users when content is flagged as possibly misleading. Facebook, for instance, is already doing this(9). Yet it is important to note that the data an AI system is trained on are often unknown to its users. This lack of clarity highlights the limitations of this solution.
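In its simplest form, the cross-checking step could look like the sketch below: an incoming claim is matched against a store of verified statements and flagged when it contradicts one. The hard-coded claims, the exact-match rule, and the verdict labels are toy assumptions; real pipelines such as Facebook's third-party fact-checking program are far more sophisticated.

```python
# Illustrative "trusted source" entries; a real system would query curated
# databases of verified claims, not a hard-coded dictionary.
VERIFIED_CLAIMS = {
    "mail-in ballots are securely counted": True,
    "mail-in ballots are routinely discarded": False,
}

def check_claim(claim: str) -> str:
    key = claim.lower().strip()
    if key not in VERIFIED_CLAIMS:
        return "unverified: no trusted source found"
    return "confirmed" if VERIFIED_CLAIMS[key] else "flagged as possibly misleading"

print(check_claim("Mail-in ballots are routinely discarded"))
# -> flagged as possibly misleading
```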
Giving users more control over the content they see can also help reduce polarization. Allowing people to adjust their settings to include more diverse perspectives empowers them to break out of echo chambers. An example could be a “Diversity Mode” in news apps, where users choose to see a wider range of content and viewpoints. AllSides and Ground News are prominent examples of this(10).
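At its simplest, such a setting could be a user-controlled weight on the feed's composition, as in the hypothetical sketch below; the setting name and the weights are invented for illustration and are not features of AllSides or Ground News.

```python
def feed_mix(diversity_mode: bool) -> dict[str, float]:
    # Share of the feed drawn from content that matches, is neutral toward,
    # or opposes the user's usual viewpoint. The weights are assumptions.
    if diversity_mode:
        return {"matching": 0.4, "neutral": 0.3, "opposing": 0.3}
    return {"matching": 0.8, "neutral": 0.15, "opposing": 0.05}

print(feed_mix(diversity_mode=True))
# -> {'matching': 0.4, 'neutral': 0.3, 'opposing': 0.3}
```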
transparency is essential
Research suggests that within a few years, as much as 90% of all online content could be AI-generated(11). This calls for openness and the flexibility to customize your own settings. Regular reviews of algorithms can help ensure they do not accidentally favor extreme or sensational content. Educating users about how these systems work can also boost public trust. Working together on research and pilot projects can help develop strategies that reduce polarization and improve media literacy. With transparency, user control, and teamwork, algorithms can become tools that promote unity and strengthen democratic values. Shall we?
This article is part of The Outside World Formula created by ftrprf
For organizations, it’s pivotal to thoroughly understand what is happening in society. We help companies generate comprehensive insights into societal change and its potential effects on their strategy and operations, both negative and positive. With actionable societal insights, courageous plans, and a can-do mentality, we connect the outside world to your company's strategy. For these outside-world insights, we use a rigorous methodology that includes data processing, quantitative and qualitative analysis, and a thorough review process to ensure the accuracy and consistency of our findings.
For more information, please contact info@ftrprf.com.