Fake News, Real Problems: Deepfakes and Their Impact on Business Integrity
Deepfakes have been around for a few years, but with the latest innovations in generative AI, access to the technologies that enable them is almost mainstream. OpenAI, for instance, has decided not to release its latest voice AI technology to the general public, as it is reportedly able to generate like-for-like audio after listening to only 15 seconds of an original voice recording. Today we will delve into this topic, as always focusing on what it means for corporates and how best to prepare to mitigate the risk.
Focus On: Deepfakes In the Corporate World
Deepfakes, which are created using advanced artificial intelligence techniques such as Generative Adversarial Networks (GANs), can manipulate audio-visual content to create highly convincing depictions of individuals saying or doing things they never actually said or did. As this technology becomes more sophisticated and accessible, it poses a range of risks to businesses, including the spread of disinformation, fraud, and damage to reputation.
The potential impact of deepfakes on businesses cannot be overstated. Consider, for example, a scenario in which a deepfake video surfaces, appearing to show a company's CEO engaging in unethical or illegal behavior. Even if the video is eventually proven to be a deepfake, the immediate consequences, such as a drop in stock price or a loss of consumer trust, could be severe and long-lasting. On a potentially even bigger scale, LLMs such as OpenAI's ChatGPT could be used to generate copy for mass emailings, lowering the barrier to entry for phishing fraud.
Law enforcement agencies are actively working to address the challenges posed by deepfakes, developing advanced detection tools that can identify inconsistencies and anomalies in digital content. However, the responsibility for mitigating the risks associated with deepfakes does not lie solely with law enforcement. Businesses must also take steps to protect themselves and their customers.
One key area of focus for businesses should be education and awareness. It is essential that employees at all levels, particularly those in roles such as communications, legal, and cybersecurity, are informed about the nature of deepfakes and the potential threats they pose. By fostering a culture of digital literacy and critical thinking, companies can improve their ability to identify and respond to deepfake-related incidents. To this end, I particularly like unannounced test emails, where employees are tested on their ability to flag phishing attempts.
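To make the idea concrete, here is a minimal sketch of how the results of an unannounced phishing test campaign could be aggregated. All names, fields, and thresholds are illustrative assumptions, not any particular vendor's tooling:

```python
from dataclasses import dataclass

@dataclass
class PhishingTestResult:
    """Outcome for one employee in a simulated phishing campaign (hypothetical schema)."""
    employee: str
    clicked_link: bool      # did they click the bait link?
    reported_email: bool    # did they flag it to security?

def summarize_campaign(results: list[PhishingTestResult]) -> dict:
    """Aggregate campaign outcomes into the metrics a security team would track."""
    total = len(results)
    clicked = sum(r.clicked_link for r in results)
    reported = sum(r.reported_email for r in results)
    return {
        "click_rate": clicked / total,
        "report_rate": reported / total,
        # Employees who clicked are candidates for follow-up training.
        "needs_training": [r.employee for r in results if r.clicked_link],
    }

results = [
    PhishingTestResult("alice", clicked_link=False, reported_email=True),
    PhishingTestResult("bob", clicked_link=True, reported_email=False),
    PhishingTestResult("carol", clicked_link=False, reported_email=False),
    PhishingTestResult("dave", clicked_link=False, reported_email=True),
]
summary = summarize_campaign(results)
print(summary["click_rate"])      # 0.25
print(summary["needs_training"])  # ['bob']
```

Tracking click rate and report rate over successive campaigns gives a simple, measurable signal of whether the awareness program is actually working.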
In addition to education, businesses should invest in robust authentication and verification technologies, especially for sensitive communications and transactions, exploring emerging solutions specifically designed to detect and counter deepfakes. Voice, for instance, is no longer a secure second-factor authentication option, as mentioned in the intro.
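Since a voice can now be cloned, sensitive requests need a factor that cannot be faked by imitation. One minimal sketch, using only Python's standard library, is cryptographic message authentication: the sender signs the request with a key shared out of band, and the recipient verifies the tag before acting. The payload and key handling here are deliberately simplified assumptions, not a production design:

```python
import hashlib
import hmac
import secrets

# In practice the key would be provisioned out of band (e.g. per employee/device)
# and stored in a secrets manager; generating it inline is for illustration only.
SHARED_KEY = secrets.token_bytes(32)

def sign_request(payload: bytes, key: bytes = SHARED_KEY) -> str:
    """Attach an HMAC-SHA256 tag proving the sender holds the shared key."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_request(payload: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload = b"wire 250,000 USD to account 12345"
tag = sign_request(payload)
print(verify_request(payload, tag))                      # True
print(verify_request(b"wire 9,999,999 USD elsewhere", tag))  # False: tampered request fails
```

Unlike a voice callback, a forged or altered instruction fails verification no matter how convincing the accompanying audio is.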
Collaboration with specialized technology partners can also play a vital role in a company's deepfake defense strategy. By working with firms at the forefront of AI and machine learning research, businesses can gain access to cutting-edge detection tools and insights that can be customized to their specific needs and risk profiles. It is key to develop high-quality relationships with these solution providers, to gain early and/or exclusive access to their best tech. Back to the example on voice: OpenAI is releasing its latest voice tech only to a small group of trusted enterprise partners.
It is also important to review and update corporate crisis management and response plans to account for potential deepfake scenarios. This includes conducting simulations and drills to test the effectiveness of existing protocols and identify areas for improvement.
On top of internal measures, businesses also have an opportunity to contribute to the development of industry standards and best practices related to deepfakes. By actively participating in relevant forums and advocacy efforts, companies can help shape policies and regulations that promote responsible innovation while mitigating the risks associated with malicious applications of the technology.
Spotlight On: OpenRouter
Trying multiple generative AI models can be daunting, especially without a technical background, or if you simply do not want to create top-up wallets on multiple sites to access the paid-for models. OpenRouter solves this by aggregating more than 100 LLMs, some free and some paid. You just need to top up one wallet and then pay as you go!
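For readers who do want to peek under the hood: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so switching between the models it aggregates is just a string change. The sketch below only builds the request (no network call is made); the API key and model name are placeholders you would replace with your own:

```python
# Illustrative only: OpenRouter's API is OpenAI-compatible, so one key and one
# wallet cover every model it aggregates. Key and model below are placeholders.
API_KEY = "sk-or-...your-key-here..."
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> tuple[dict, dict]:
    """Return (headers, payload) for a chat completion call.
    Trying a different LLM means changing only the `model` string."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_chat_request(
    "mistralai/mistral-7b-instruct", "Summarize deepfake risks for a CFO."
)
print(payload["model"])  # mistralai/mistral-7b-instruct
```

You would POST the payload to the endpoint with any HTTP client; the same two dicts work unchanged whether you point `model` at a free model or a paid one.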
Follow me
That's all for this week. To keep up with the latest in generative AI and its relevance to your digital transformation programs, follow me on LinkedIn or subscribe to this newsletter.
Disclaimer: The views and opinions expressed in Chronicles of Change and on my social media accounts are my own and do not necessarily reflect the official policy or position of S&P Global.