The Perils of Artificial Intelligence in Media: Ethical Challenges Ahead for Deepfakes


For the past four years, we have lived in an age of disinformation. The most senior Trump administration officials have repeated a constant stream of lies that have only been amplified by the ubiquity of social media, and parroted by zealous supporters far and wide. Even when video and photographic evidence is presented to clearly refute the lies, it is quickly dismissed as fake, merely a propaganda tool of ‘the opposition’ to brainwash you into ‘their lies.’ Needless to say, this rhetoric has only perpetuated a ‘me vs. you’, ‘us vs. them’ mindset that has sown deep wounds in our nation, manifesting in the recent January 6th Capitol riot.

Before now, never in American history have ‘alternative facts’ been a part of serious political discourse. Senior officials have not only compromised their personal integrity by spewing disinformation, but have gone so far as to undermine the credibility of the third-party press itself, particularly outlets that voice critical opinions. Casting doubt and aspersions on critics is a classic move from the dictator's playbook -- destroy the free press, and the very bedrock of democracy crumbles underneath.

Now imagine a worst-case scenario: supporters of Donald Trump, post-presidency, using deepfake technology to produce videos of his likeness inciting violence in city halls and state houses across the nation. The same tactics could easily be employed by malicious foreign adversaries, with no limit on which officials or business leaders could be impersonated -- President Biden, his cabinet, CEOs of major technology companies, the list goes on. Deepfake technology is now so advanced that video and voice can be created for any individual that are largely indistinguishable from the real-life, genuine person.


Sway helps transform simple motions into TV-quality dances and stunts. Above is an imitation of Sam Elliott’s dance in Doritos’ Super Bowl ad (Credit: Humen.ai)

For the most part, deepfake technology has enhanced the everyday consumer experience. Mobile apps like Reface and Sway are great tools for TikTok influencers, advertisers, and entertainment producers to create deepfake videos that jazz up their content and engage younger audiences (disclosure: Signia is an investor in Sway). Replika.ai and Resemble.ai enable voice recreations that can be used in games, advertisements, and more. But the open-source, open-access nature of this technology has a dark side that can no longer be ignored. Some applications have already crossed ethical boundaries, such as DeepNude, an eerily realistic nude deepfake app that drew heavy criticism and was shut down within hours of launching.

Soon, it is quite possible that consumers and citizens will no longer be able to accurately discern between what is real and what is fake -- where one can truly no longer believe what the eyes and ears feed them. And in our worst-case scenario: would supporters react any differently to a deepfake Donald Trump than to his genuine self? The answer, unfortunately: probably not.

Deepfakes are difficult to detect not only for human eyes and ears, but even for the most sophisticated technologies built by the world’s leading research engineers. This is because deepfakes are created via two separate neural networks that work against each other -- one generating the fake image, the other comparing it against real images -- to continuously improve the quality and accuracy of the intended deepfake. The underlying system is known as a generative adversarial network (GAN) and emerged less than seven years ago, which is why progress on reliable detection has yet to be significant.
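The adversarial loop between the two networks can be sketched in a few lines. Below is a deliberately toy NumPy illustration, not a real deepfake model: the "generator" is a two-parameter linear map trying to imitate samples from a target distribution, the "discriminator" is a logistic classifier, and all hyperparameters are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: samples from N(4, 1) that the generator must learn to mimic.
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0
lr = 0.01

for step in range(5000):
    z = rng.normal(0.0, 1.0, 64)
    real, fake = sample_real(64), a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_real, s_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - s_real) * real + s_fake * fake)
    c -= lr * np.mean(-(1 - s_real) + s_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    s_fake = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean(-(1 - s_fake) * w * z)
    b -= lr * np.mean(-(1 - s_fake) * w)

fake_mean = np.mean(a * rng.normal(0.0, 1.0, 10000) + b)
print(f"generated mean after training: {fake_mean:.2f} (real mean: 4.0)")
```

The same tug-of-war, scaled up to deep convolutional networks and millions of face images, is what drives deepfake quality ever upward -- and is also why the generator's output is, by construction, hard for any fixed detector to flag.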

Nevertheless, leading technologists have been hard at work. In September 2020, Microsoft announced a duo of new technologies to support deepfake detection -- one that generates digital hashes attached to authentic images or videos, and another that detects and matches those hashes to verify authenticity. Facebook launched a deepfake detection challenge in June 2020, inviting top researchers and tech amateurs alike to try their hand at building a robust deepfake detection model. Even the winning model reported only 82% accuracy when tested against a public dataset, but it is initiatives like these that move the needle ever so slightly on this grand technological challenge. No perfect solution has been found as of 2021; it is at least a small comfort to know that the smartest engineers are working around the clock to find one. And I know that many investors are actively searching for an investment in this space.
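The register-and-match idea behind Microsoft's first tool can be illustrated with a simplified sketch. Microsoft's actual system uses hashes designed to survive re-encoding; the toy version below uses a plain SHA-256 digest instead, and the registry, function names, and sample "feed" are hypothetical, invented for this example.

```python
import hashlib

def fingerprint(media_bytes):
    # A SHA-256 digest of the raw media bytes serves as the fingerprint.
    return hashlib.sha256(media_bytes).hexdigest()

registry = {}  # digest -> description of the authenticated source

def register(media_bytes, source):
    # A publisher registers the fingerprint of a genuine clip at release time.
    registry[fingerprint(media_bytes)] = source

def verify(media_bytes):
    # A viewer's copy matches only if it is bit-for-bit identical to the
    # registered original; any edit yields a completely different digest.
    return registry.get(fingerprint(media_bytes))

original = b"\x00\x01 genuine video bytes (placeholder)"
register(original, "official press-office feed")

tampered = original + b"\xff"  # a single altered byte breaks the match
print(verify(original))   # the registered source
print(verify(tampered))   # None -> flag the copy as unverified
```

A cryptographic hash like this fails on harmless recompression, which is exactly why production systems lean on perceptual, re-encoding-tolerant fingerprints -- but the matching workflow is the same.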

Today, the spread of ‘alternative facts’ and accusations of ‘fake news’ is clear and present. And the technologies to twist those ‘facts’ and ‘news’ into seemingly genuine, believable sources of information are readily available to anyone. It won’t be long before true ‘deepfake news’ threatens to undermine the tenets of a free press and American democracy; the technology community must sound the warning bells and act before it is far too late.

Special thanks to Signia's Kevin Wu for his help in co-authoring this article.

