Harnessing Artificial Intelligence to Combat Disinformation in the MENA Region
3 Questions with Theodore Caponis, Project leader of Dalil
In the ever-evolving landscape of digital information, Dalil, one of the projects selected at the 2023 Forum to be part of the Scale-up program, has emerged as a pivotal force in combating disinformation across the MENA region. In this interview with Theodore Caponis, we dive into Dalil's innovative approach to fact-checking.
How does Dalil contribute to addressing the challenges of disinformation in the digital age, and how do you envision its impact towards a more informed and resilient public discourse in the MENA region?
Theodore Caponis: Over the past 20 years, the rise of social media and advances in smartphone technology have accelerated the globalisation of citizen journalism, for better and for worse. Today, the rise of AI-generated content requires a similar explosion of citizen fact-checking. At Dalil, we are tackling this growing complexity of disinformation by making it easier, faster, and more intuitive to track the news and detect deception. Having mainly worked with fact-checkers from the MENA region so far, we are now expanding the platform so that it can also serve the media ecosystem as a whole, from content producers and consumers to policymakers.
Current content moderation on social media is ill-suited to fact-checking in Arabic, as most tools and algorithms cater to English-language content. How does artificial intelligence help you bridge this gap? Can you share examples of how AI has enhanced the efficiency and impact of the platform in combating disinformation in Arabic?
TC: It is true that current AI models’ strong suit is English, so we developed our own models and trained them on Arabic. By “understanding” Arabic, they help accelerate monitoring through the scraping of news outlets and the clustering of content by topic, giving a comprehensive view of trending issues. They also enhance verification capabilities by enabling users to measure the level of subjectivity within a given text and the degree to which it deploys rhetorical devices aimed at influencing readers. The natural next step will be moving towards language-agnostic functionality. So far, feedback has been encouraging, and the user base of professional fact-checkers and media workers has grown to 200+ in under a year. We know that we still have a long way to go; what we also know is that we can get there – and attending the Paris Peace Forum was hugely beneficial in this regard.
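To give a concrete sense of the kind of monitoring step Caponis describes (grouping scraped Arabic content by topic), here is a minimal, hypothetical sketch using the open-source sentence-transformers and scikit-learn libraries. The model name, sample headlines, and distance threshold are assumptions for illustration only; this is not Dalil's actual pipeline or models.

```python
# Hypothetical sketch: clustering Arabic headlines by topic, illustrating the
# monitoring step described in the interview (not Dalil's actual code).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

# A multilingual embedding model that handles Arabic reasonably well (assumption).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

headlines = [
    "ارتفاع أسعار الوقود في لبنان",      # fuel prices rise in Lebanon
    "أزمة المحروقات تتفاقم في بيروت",    # fuel crisis worsens in Beirut
    "نتائج مباراة كرة القدم الليلة",      # tonight's football match results
]

# Embed each headline, then group semantically similar ones together.
embeddings = model.encode(headlines)
clusterer = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.5,   # illustrative value; would need tuning on real data
    metric="cosine",
    linkage="average",
)
labels = clusterer.fit_predict(embeddings)

# Headlines sharing a label belong to the same trending topic cluster.
for headline, label in zip(headlines, labels):
    print(label, headline)
```

In a sketch like this, the first two headlines would typically land in one cluster (fuel crisis) and the third in another (sports), which is the kind of topic-level grouping that lets fact-checkers see trending issues at a glance.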
You recently released a report about news consumption in Lebanon in the context of recent events in Gaza. Can you tell us more about how this media coverage is being consumed in Lebanon, and how fact-checking can help in that context?
TC: The information disorder we’ve seen since October 7 is a typical characteristic of war. Today, it is simply easier to create, fall for, and share disinformation. The Lebanese people are closely following the violence in Gaza – probably more so than the rest of the world, as it also hits home. Our survey showed that they trust social media coverage most, while having very little to no fact-checking awareness, which is problematic. In this context, fact-checkers are an essential frontline defence, and they need all the available support. But information disorder concerns everyone, and each of us has a role to play. Our advice here: if something seems unbelievable, chances are that it is. Be critical, check before you share, and familiarize yourself with the range of analysis tools that are available online. Creating an account on Dalil would be a good first step in this respect.
Theodore Caponis is Founding Partner of Siren Analytics and project leader of the Disinformation Analysis and Listening Lab (Dalil), one of the 2024 Paris Peace Forum Scale-up projects.