X owner Elon Musk’s reshare of a manipulated, faux campaign ad for Vice President Kamala Harris last week raised alarms because he did not disclose that the clip, which parroted rightwing takes about the likely Democratic nominee, was a parody. Experts warn that the move illuminates AI’s potential to further embed distrust of election institutions among voters ahead of the 2024 election.
Musk reposted the manipulated video of Harris to X on Friday night. The clip, which used much of the same imagery from her first presidential campaign ad, featured a new voice-over that appeared to be digitally altered to sound like Harris.
“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says in the clip. “I was selected because I am the ultimate diversity hire. I’m both a woman and a person of color, so if you criticize anything I say, you’re both sexist and racist.”
The video goes on to say Harris doesn’t know “the first thing about running the country” and dubs Biden the “ultimate deep state puppet,” while maintaining the vice president’s campaign branding and splicing in authentic clips from past Harris speaking engagements.
The viral video underscores the potential for AI-generated images, audio and videos to spread political misinformation even as they attempt to poke fun through parody or satire, an issue compounded in a highly contentious election year and by Americans’ waning trust in the nation’s electoral process. While Musk’s post is far from the first to spark controversy, it’s a sign of what role AI deepfakes can play — and how far they can reach — in sowing doubt as voters prepare for November, according to Mekela Panditharatne, senior counsel at the NYU Brennan Center for Justice’s elections and government department.
“It emblematizes this period where we are seeing the burgeoning spread of generative AI and its impact on elections and the information environment,” she said, noting that similar deepfakes have become more common in the past year. While deepfakes predated the rise of generative AI, the latter allows for deepfakes to “spread in a way that is much faster,” while making it “easier and cheaper to produce more sophisticated looking and sounding content.”
Because the information environment is highly polarized, whether viewers recognize a clip like the one Musk shared as parody can vary greatly from observer to observer, even for content that may seem “quite realistic but should be reasonably” understood as parody, Panditharatne said. Content that one audience easily or quickly recognizes as parody may not be perceived that way by a different audience, “especially if the content feeds into their preconceived notions of what a candidate is like” or their personal politics.
Oren Etzioni, a University of Washington professor emeritus of computer science and the founding CEO of the Allen Institute for AI, told Salon that the Harris deepfake ad, “to the naked eye,” was “surprisingly well done.”
While frequent X users who saw the clip could click through to the original post and see the original poster disclose it was a parody, Etzioni said that with more than 130 million views, some users are bound to see Musk’s post, which only includes the caption “This is amazing” with a laughing crying emoji, and believe it to be “informative” if not “genuine.”
That dynamic creates a disinformation problem that’s four-pronged, he explained. First, more and more Americans get some if not all of their news from social media, where “true fact” lives “side-by-side with falsehoods.” Second, people “tend to be visual animals” and react in a “very visceral way” to what they see. Third, individuals can easily create “doctored or fabricated images, video and audio that prey on that.”
“Now that combination means that anonymous users can create something that looks real and is fake, that looks compelling, but it’s not true,” said Etzioni, who also founded TrueMedia.org, a nonprofit that seeks to curb the proliferation of online deepfakes and disinformation by offering a free, online fact-checking tool. “Then when you couple that with the last nail in the coffin, which is having somebody with a wide audience and with some of his own credibility, like Elon Musk, sharing that without any warning, that’s a recipe for disaster.”
Generative AI deepfakes both inside and outside the U.S. have previously threatened to influence voters either through humor, misinformation or a combination of both, according to The Associated Press. Fake audio clips circulated in Slovakia in 2023 portrayed a candidate hatching a plan to rig an election and increase the price of beer days before the vote, while a political action committee’s 2022 satirical ad spliced a Louisiana mayoral candidate’s face onto an actor who portrayed him as an underachieving high schooler.
President Joe Biden, who headed the ticket Harris joined in 2020, has also been a frequent target of the technology. Earlier this year, a deepfake robocall using Biden’s voice urged voters in New Hampshire to skip the state’s Democratic primary, and just last week, a deepfake video of his campaign withdrawal announcement appeared to show the president cursing out his critics.
“The potential spread of content that disruptively depicts candidates or officials in ways that manipulate people’s perception of those candidates and officials, that undermine the election process itself — that is a very troubling prospect,” Panditharatne said, explaining that the risk for viewers of the content to be misled is greatest in the period immediately after the deepfake goes live.
In addition to creating potential misrepresentations of officials, malevolent actors can also exploit generative AI to bolster voter suppression through deceptive depictions of election officials, fabricated crises at polling sites and manufactured obstacles to voting, among other tactics, which could further erode the nation’s trust in electoral institutions, she said.
“That growing lack of trust in institutions and authoritative sources and information is generally a problem for elections and democracy, and the advent of generative AI and deepfakes exacerbate that issue,” Panditharatne argued.
To stay abreast of accurate information ahead of November, Etzioni and Panditharatne said they encourage voters to view content that evokes an emotional response with an appropriately critical lens, to verify the accuracy (or lack thereof) of the content they encounter with a credible fact-checker, and to consult authoritative sources of information such as legitimate news media and official election office websites.
While Congress has yet to pass legislation regulating AI as it’s used in politics, more than one-third of state legislatures have enacted laws of their own around the use of AI in campaigns and elections, according to the National Conference of State Legislatures. These laws, Panditharatne said, reflect First Amendment protections for parody and satire while working to curb potential election disinformation.
To aid in slowing the spread, Etzioni also recommends tagging political videos that have been manipulated by AI as such, which would allow viewers to engage with altered media from a more informed perspective. According to the AP, social media companies, like YouTube, have created policies with respect to sharing generated and manipulated media on their platforms.
X also boasts a policy on manipulated media barring users from sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’),” with some exceptions made for satire and memes so long as they do not cause “significant confusion about the authenticity of the media.”
Some users questioned whether Musk violated his platform’s own policy in making the post, while participants in X’s “community notes” feature, which works to contextualize if not correct posts, suggested labeling Musk’s repost. As of Tuesday, however, no label had been added.
The chance that generative AI will have an impact on the election grows as Election Day nears, Etzioni warned, as do the resources that adversaries and malevolent actors will pour into creating this kind of content.
“The closer the election is, the more effort they will put into it,” he said. “I think that we need to be both vigilant but also prepared.”