
Elon Musk Shared Fake Video of Kamala Harris. Experts Are Sounding the Alarm.

The chance that generative AI will have an impact on the election grows as Election Day nears, says one expert.

Tesla CEO Elon Musk (center) listens as Israeli Prime Minister Benjamin Netanyahu addresses a joint meeting of Congress in the chamber of the House of Representatives at the U.S. Capitol on July 24, 2024, in Washington, D.C.

X owner Elon Musk’s reshare of a manipulated, faux campaign ad for Vice President Kamala Harris on social media last week raised alarms because he did not disclose that the clip, which parroted right-wing takes about the likely Democratic nominee, was a parody. But experts warn that the move illuminates AI’s potential to further embed distrust of election institutions among voters ahead of the 2024 election.

Musk reposted the manipulated video of Harris to X on Friday night. The clip, which used much of the same imagery from her first presidential campaign ad, featured a new voice-over that appeared to be digitally altered to sound like Harris.

“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says in the clip. “I was selected because I am the ultimate diversity hire. I’m both a woman and a person of color, so if you criticize anything I say, you’re both sexist and racist.”

The video goes on to say Harris doesn’t know “the first thing about running the country” and dubs Biden the “ultimate deep state puppet,” while maintaining the vice president’s campaign branding and splicing in authentic clips from past Harris speaking engagements.

The viral video underscores the potential for AI-generated images, audio and videos to spread political misinformation even as they attempt to poke fun through parody or satire, an issue compounded in a highly contentious election year and by Americans’ waning trust in the nation’s electoral process. While Musk’s post is far from the first to spark controversy, it’s a sign of what role AI deepfakes can play — and how far they can reach — in sowing doubt as voters prepare for November, according to Mekela Panditharatne, senior counsel at the NYU Brennan Center for Justice’s elections and government department.

“It emblematizes this period where we are seeing the burgeoning spread of generative AI and its impact on elections and the information environment,” she said, noting that similar deepfakes have become more common in the past year. While deepfakes predated the rise of generative AI, the latter allows for deepfakes to “spread in a way that is much faster,” while making it “easier and cheaper to produce more sophisticated looking and sounding content.”

Because the information environment is deeply polarized, whether viewers recognize a clip like the one Musk shared as parody can vary greatly from observer to observer, even for content that may seem “quite realistic but should be reasonably” understood as parody, Panditharatne said. Content that one audience easily or quickly recognizes as parody may not be perceived that way by a different audience, “especially if the content feeds into their preconceived notions of what a candidate is like” or their personal politics.

Oren Etzioni, a University of Washington professor emeritus of computer science and the founding CEO of the Allen Institute for AI, told Salon that the Harris deepfake ad, “to the naked eye,” was “surprisingly well done.”

While frequent X users who saw the clip could click through to the original post and see the original poster disclose it was a parody, Etzioni said that with more than 130 million views, some users are bound to see Musk’s post, which only includes the caption “This is amazing” with a laughing crying emoji, and believe it to be “informative” if not “genuine.”

That dynamic creates a disinformation problem that’s four-pronged, he explained. First, more and more Americans consume some if not all of their news from social media, which allows “true fact” to live “side-by-side with falsehoods.” Second, people “tend to be visual animals” and react in a “very visceral way” to what they see. Third, individuals can easily create “doctored or fabricated images, video and audio that prey on that.”

“Now that combination means that anonymous users can create something that looks real and is fake, that looks compelling, but it’s not true,” said Etzioni, who also founded TrueMedia.org, a nonprofit that seeks to curb the proliferation of online deepfakes and disinformation by offering a free, online fact-checking tool. “Then when you couple that with the last nail in the coffin, which is having somebody with a wide audience and with some of his own credibility, like Elon Musk, sharing that without any warning, that’s a recipe for disaster.”

Generative AI deepfakes both inside and outside the U.S. have previously threatened to influence voters through humor, misinformation or a combination of the two, according to The Associated Press. Fake audio clips circulated in Slovakia in 2023 portrayed a candidate hatching a plan to rig an election and increase the price of beer days before the vote, while a political action committee’s 2022 satirical ad spliced a Louisiana mayoral candidate’s face onto an actor who portrayed him as an underachieving high schooler.

President Joe Biden, whose 2020 ticket Harris joined as running mate, has also been a frequent target of the technology. Earlier this year, a deepfake robocall using Biden’s voice urged voters in New Hampshire to skip the state’s Democratic primary, and just last week, a deepfake video of his campaign withdrawal announcement appeared to show the president cursing out his critics.

“The potential spread of content that disruptively depicts candidates or officials in ways that manipulate people’s perception of those candidates and officials, that undermine the election process itself — that is a very troubling prospect,” Panditharatne said, explaining that the risk for viewers of the content to be misled is greatest in the period immediately after the deepfake goes live.

Beyond misrepresenting officials, malevolent actors can also exploit generative AI to bolster voter suppression — for example, through deceptive depictions of election officials, fabricated crises at polling sites or manufactured obstacles to voting — which could further erode the nation’s trust in electoral institutions, she said.

“That growing lack of trust in institutions and authoritative sources and information is generally a problem for elections and democracy, and the advent of generative AI and deepfakes exacerbate that issue,” Panditharatne argued.

To stay abreast of accurate information ahead of November, Etzioni and Panditharatne encourage voters to view content that evokes an emotional response with an appropriately critical lens, to consult a credible fact-checker to verify what they encounter, and to rely on authoritative sources such as legitimate news media and official election office websites.

While Congress has yet to pass legislation regulating AI as it’s used in politics, more than one-third of state legislatures have enacted laws of their own around the use of AI in campaigns and elections, according to the National Conference of State Legislatures. These laws, Panditharatne said, reflect First Amendment protections for parody and satire while working to curb potential election disinformation.

To aid in slowing the spread, Etzioni also recommends tagging political videos that have been manipulated by AI as such, which would allow viewers to engage with altered media from a more informed perspective. According to the AP, social media companies, like YouTube, have created policies with respect to sharing generated and manipulated media on their platforms.

X also boasts a policy on manipulated media barring users from sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’),” with some exceptions made for satire and memes so long as they do not cause “significant confusion about the authenticity of the media.”

Some users questioned whether Musk, in making the post, violated his platform’s own policy, while participants in X’s “community notes” feature, which works to contextualize if not correct posts, suggested labeling Musk’s repost. As of Tuesday, however, no label had been added.

The chance that generative AI will have an impact on the election — and the resources that adversaries or malevolent actors will devote to creating this kind of content — grows as Election Day nears, Etzioni warned.

“The closer the election is, the more effort they will put into it,” he said. “I think that we need to be both vigilant but also prepared.”
