The Leading Edge: AI and Disinformation in the 2024 Election
By Philip Athey, Editor
Goodbye “fake news,” hello “it’s just artificial intelligence.”
On Sunday, former President Trump falsely claimed that Vice President Kamala Harris’s campaign used AI to manipulate photos and conjure up crowds greeting her as she got off her plane at Detroit Metropolitan Wayne County Airport, making attendance look much larger than it actually was.
Local news site MLive estimated that 15,000 people attended the rally, and analyses of the photo by CBS and other news organizations found no evidence that it was manipulated by AI.
Despite that, Trump supporters took the former president’s claim as fact and spent days repeating on social media that Harris’s campaign relied on AI to make her crowds look bigger than Trump’s.
There is nothing new about Trump lying. There is also nothing new about his supporters taking his falsehoods as gospel and rejecting the truth, no matter how much evidence is presented. During his 2016 run, Trump popularized the term “fake news” to discredit any report that was even mildly critical of him. Eventually, the effort by Trump and his advisers to discredit and invalidate the 2020 election results led to an insurrection attempt on Jan. 6, 2021, which resulted in charges against more than 1,200 Trump supporters, ABC reported in January. Trump himself faces four federal charges and one state indictment related to the election.
Even before Trump, disinformation, misinformation, and candidates telling their supporters to ignore critical press were part of presidential elections, going back to George Washington. But the AI claim is new, and perhaps a glimpse into the future of election disinformation. From now on, if candidates or supporters see something they don’t like, they can spin it as AI-generated propaganda that should be ignored.
The claim carries extra force because generative AI has already been deployed in this election in attempts to dissuade voters.
Ahead of the New Hampshire primary in January, political consultant Steve Kramer sent out robocalls that used AI to mimic President Biden’s voice and urge voters to stay home. Kramer now faces a $6 million fine for the stunt. More recently, Elon Musk shared an AI-manipulated parody of a Harris campaign ad that mocked the vice president. In addition, the Russian government has used generative AI to fuel bots on X, the site formerly known as Twitter, that spread disinformation.
The Russian bot farm was shut down by the Justice Department in July. But given how cheap and easy it is to set up these disinformation bots, this will almost certainly not be the last attempt by foreign governments, or even private individuals, to spread disinformation and sow discord in the run-up to the election.
The mix of real incidents in which AI is used to spread falsehoods, along with claims from candidates and their supporters that negative moments caught on tape are just AI fabrications, will likely blow more fog into an already murky information environment that voters will have to navigate this fall and in future elections.
PolicyView: AI is a twice-monthly intelligence report from National Journal that provides a comprehensive view of AI legislation at the state and federal levels. We track what’s gaining momentum in specific areas of the country, what industries are most likely to be affected, and which lawmakers and influencers are driving the conversation.
To learn more and request the latest report, visit policyviewresearch.com.