The Digital Briefing by Digitalis

Introduction

Welcome to December’s Digital Briefing, your guide to the latest developments in digital risk and online reputation management.   

We kick off this month with a look at how AI is transforming online search. AI advancements this year have revolutionised the way we search the web. But with increased competition and AI regulation looming, what’s next for the search landscape? We share our predictions in our first article.

Next, we move onto the big theme of 2024: elections. By the time this year comes to an end, more than 50 countries worldwide will have held leadership contests. No wonder it’s been dubbed the “Year of Democracy”. Unsurprisingly, AI has made a mark in many of these political races, to varying degrees. We examine how and explore what can be done to mitigate the threat AI poses to the democratic process.  

In our third article, we reveal how large tech companies, most notably social media giants, are using vast quantities of personal information to build their own AI models. This is naturally prompting serious concerns around privacy and leaving some people uneasy about how companies are using their data. We consider where different jurisdictions stand on this subject and what you can do to protect yourself.

We then turn our attention to social media and the great "X-odus". Millions of users abandoned the platform formerly known as Twitter this year, following Elon Musk’s contentious takeover. However, X isn’t the only platform facing an uncertain future. We explore the other factors set to disrupt the social media landscape and what the future may hold for social networks.   

Finally, with the year coming to an end, we share some of our favourite tech books, podcasts and documentaries of 2024. Covering subjects from Elon Musk’s dramatic tenure at X to AI-generated deepfakes, there’s something for everyone, so do take a look.  

Wishing you all a happy and peaceful festive season. We look forward to working with you in 2025.     

Dave King

CEO




As advances in AI reshape how we find information online, generative AI’s impact on the search engine landscape shows no signs of slowing down. Within an environment of heightened regulation and increased competition, we analyse some of the key trends expected to affect online search in 2025.

By Jorge R., Director, Search Strategy

Find out more



Over 50 countries held leadership elections in 2024, an unprecedented level of democratic engagement. Despite their apparent diversity, these nations have encountered similar challenges related to the rise of AI tools, along with changing perceptions of a generational divide within the electorate. We delve into these key trends in more depth to understand the issues and identify where effective strategies may limit any negative impacts.

By Adam Ispahani, Senior Executive, Client Services

Find out more



Research shows increasing concern around how social media companies are using the extensive sources of personal information they have access to in order to develop and train their own AI models. Data privacy regulations vary across jurisdictions, as do the different platforms’ approaches to AI and the ability for users to opt out of how their data is used. We investigate the debate and offer recommendations for protecting your digital privacy.

By Alex D., Associate, Digital Risk

Find out more



The takeover of Twitter, now X, is one of the most controversial business acquisitions of the 21st century, with active user numbers declining dramatically in the period since. As governments across the world enforce stricter measures, a combination of voluntary and involuntary departures from various social media platforms threatens to disrupt the landscape. We explore the different perspectives on this evolving scenario.

By Tom Head, Senior Associate, Client Services

Find out more



With the rapid advancement of artificial intelligence and the persistent threat of disinformation defining 2024, now might be the time to prepare for the challenges that lie ahead by learning more about this dynamic technology landscape. We’ve reviewed what’s on offer from the huge selection of tech-focused books, podcasts, and documentaries available, and share our top recommendations with a synopsis of each.

By Eve Bolton, Senior Executive, Digital Risk

Find out more


By Eve Bolton, Senior Executive, Digital Risk

UNESCO warns of pressing need to train online influencers in verifying facts

  • In a new report addressing the spread of online misinformation, UNESCO has issued a warning that social media influencers require “urgent” training to fact-check their content before sharing it with their followers.
  • According to the report, two-thirds of content creators fail to check the accuracy of their material; four in 10 cited the “popularity” of an online source as a key indicator of its credibility; six in 10 said they had not verified the accuracy of their information before sharing it with their audience; and creators generally did not use official sources such as government documents and websites.
  • The report was based on UNESCO’s survey findings of 500 content creators from 45 countries and territories, with the majority from Europe and Asia.
  • Discussing the threat of misinformation, the report said: “The low prevalence of fact-checking among content creators highlights their vulnerability to misinformation…”, warning that this could have far-reaching consequences for public discourse and trust in media.
  • In light of the report’s findings, UNESCO has collaborated with the Knight Center for Journalism in the Americas to offer a free online course on “how to be a trusted voice online”, which includes modules on fact-checking and on creating content about elections or crises.
  • In January 2024, as part of its Global Risks Report, the World Economic Forum stated that “misinformation and disinformation” would emerge as one of the most severe global risks of 2024 and 2025.
  • In November 2024, new research from Ofcom found that four in 10 UK adults had encountered misinformation or deepfake content in the previous four weeks.

 

OpenAI makes Sora publicly available to US users

  • OpenAI has made Sora, its artificial intelligence video generator, publicly available to ChatGPT Pro and Plus users in the US.
  • The tech company first unveiled Sora in February, but it was only accessible to select artists, filmmakers, and safety testers. Unlike traditional AI programs that produce written responses, Sora creates high-quality videos based on a user’s text input.
  • OpenAI states that it hopes “this early version of Sora will enable people everywhere to explore new forms of creativity, tell their stories, and push the boundaries of what’s possible with video storytelling”.
  • According to The Guardian, OpenAI is still working through compliance requirements with the Online Safety Act in the UK and both the Digital Services Act and GDPR in the EU, so it could be a while before Sora reaches the UK and EU.
  • Despite the creative possibilities presented by Sora, some critics have warned that the easy-to-use technology could be misused for disinformation and deepfakes.
  • In the past year alone, numerous politicians and celebrities have had their image distorted by artificial intelligence, a trend that we can expect to continue well into 2025 and beyond.

 

The Strava security problem still looms large

  • Back in 2018, it was widely reported that a student had identified how activity data from users of the sports social media app Strava had inadvertently mapped US army bases in Syria and Afghanistan, exposing hugely sensitive information that could be used to cause real harm.
  • Although the story made headlines at the time, it appears that important lessons have not really been learned.
  • In recent weeks, French newspaper Le Monde has released a series of investigations that have highlighted how the popular exercise app can be used to access sensitive location information about some of the world’s most important leaders.
  • In an investigation focused on Emmanuel Macron’s bodyguards, the Security Group for the Presidency of the Republic (GSPR), Le Monde found that Macron’s security team had published sensitive location information on the platform, which bad actors could use to infer details such as his hotels or meeting locations.
  • In one investigation, Le Monde traced the Strava activity of GSPR profiles to determine that Macron had spent a weekend in the Normandy seaside resort of Honfleur in 2021, a trip that was supposed to be private and was not listed on the President’s official agenda.

 

Fighting scammers with AI

  • In 2024, scammers have been using artificial intelligence as a powerful tool to craft all sorts of elaborate scams that prey on unsuspecting victims, but what would happen if the “good guys” decided to use AI to catch out these unscrupulous folk?
  • In recent weeks, the telecommunications company O2 has announced its new fraud-fighting tactic: an AI granny named Daisy. Trained using real scam content, the AI granny has been designed to fool scammers into thinking they’ve found the perfect target and to waste as much of their time as possible.
  • In one instance, according to O2, three phone scammers teamed up on a call that lasted nearly an hour, trying to get Daisy to type www. into a web browser.
  • According to the call protection company Hiya, tens of millions of scam calls were made around the world every day last year, with more than USD 1 trillion stolen from unsuspecting victims.

 

Meta reflects on the year of elections

  • Meta, the parent company of Facebook, Instagram, and WhatsApp, has shared a report about what it witnessed on its platforms during 2024’s global elections.
  • In the report, Nick Clegg, Meta's president of global affairs, shared that the company had taken down around 20 new covert influence operations, including: a Russian network that used dozens of Facebook accounts and fake news websites to target individuals in Georgia, Armenia, and Azerbaijan; and a Russia-based operation that leveraged AI to create fake news sites mimicking brands like Fox News and The Telegraph, aiming to undermine Western support for Ukraine, while also using Francophone fake news platforms to promote Russia’s role in Africa and criticise France’s.
  • Clegg commented on the role of artificial intelligence in influencing voters, noting it was “striking” how little AI had been used to mislead voters during a year marked by significant global elections.
  • He also disclosed that, in the month leading up to the US Election, Meta blocked 590,000 requests to generate AI-created images depicting political figures such as Kamala Harris, Donald Trump, JD Vance, and Joe Biden.
  • Warning against complacency, Clegg said that the relatively low impact of fakery using generative AI to manipulate video, voices, and photos was “very likely to change”, noting that AI tools will become more and more prevalent during the year ahead.

 


To sign up to the next Digital Briefing, please email eve.bolton@digitalis.com or subscribe via LinkedIn here.

Follow us on LinkedIn to stay up to date with the latest developments across online reputation and digital risk.


Copyright © 2024, Digitalis, all rights reserved.

Our mailing address is:

16 Berkeley Street

London

W1J 8DZ

