When pop star Taylor Swift posted her much-anticipated endorsement of Kamala Harris for president on Instagram this week, she explained that the other candidate, Donald Trump, pushed her to be crystal clear about how she plans to vote. Trump had recently reposted doctored images to his social media site Truth Social falsely purporting to show Swift and blonde-haired fans endorsing his campaign instead.
As Swift noted, the images are part of a wave of online content known as “deepfakes” that are generated by emerging artificial intelligence, or AI, in order to manipulate public discourse and trick people into believing scams. Such content can be found across the internet, but deepfakes proliferate on networks with little moderation, such as Trump’s Truth Social or Elon Musk’s X.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote in her endorsement of Harris. “It really conjured up my fears around AI, and the dangers of spreading misinformation.”
Thanks to rapid advances in AI technology, “deepfake” videos, images, voice recordings and other meme-ready content that falsely impersonate politicians and celebrities are now a fact of life. With Congress sharply divided along party lines and no laws specifically addressing hyper-realistic, AI-generated imposters and scams on the books, federal regulators and state lawmakers are scrambling to catch up.
Some of Musk’s own fans recently learned about the dangers of deepfakes the hard way after clicking on a fake but convincing video of the controversial tech billionaire promoting an investment scam. The video received tens of thousands of views on social media platforms such as Facebook, and victims reported losing thousands of dollars to unknown scammers.
Musk has embraced right-wing politics and conspiracy theories since buying the social media site formerly known as Twitter, and in late July he shared with millions of followers a parody deepfake: an AI-generated video purporting to feature Vice President Kamala Harris’s voice. The video uses a voice-cloning tool to mimic the presidential candidate, revealing AI’s potential to mislead voters.
Some deepfakes appear to be harmless parodies, but political campaigns are already deploying AI-generated ads that look like actual candidates and events but are in fact fake, according to the watchdog group Public Citizen. Originally passed in 1971, the Federal Election Campaign Act prohibits candidates for public office from impersonating other candidates in public, and it’s also a crime to impersonate a political candidate in order to fundraise off their name. Such tactics are fraudulent and mislead voters.
So far, only the Federal Communications Commission (FCC) has taken decisive action on deepfakes at the federal level. In February, responding to a New Hampshire robocall that impersonated President Joe Biden during the Democratic primary, the commission clarified that federal law bans the use of generative AI to create human voices in robocalls. The robocall included Biden’s famous catchphrase “a bunch of malarkey” and suggested that voting in the primary would preclude casting a vote in the November general election.
It was a clear attempt at voter suppression that showcased the dangers AI deepfakes can pose to democracy. An investigation into the New Hampshire robocall led back to a political consultant, a magician-turned-voice actor in New Orleans and two firms in Texas. One of those firms, Lingo Telecom, agreed to pay a $1 million civil penalty in a settlement with the FCC.
Under the leadership of FCC Chair Jessica Rosenworcel, the agency has also proposed new rules requiring disclosure to viewers when AI is used in political ads. However, the FCC rulemaking process can take months, if not years, and it’s unlikely the proposal will go into effect before the November elections.
Congress could pass legislation regulating AI and deepfake content, but the Republican-controlled House struggles simply to pass a budget, making congressional action unlikely anytime soon. State lawmakers in both parties have stepped up, and New Hampshire is now one of 40 states considering disclosure rules for AI-generated ads and content, according to Roll Call.
At the federal level, public interest watchdogs are now turning to the Federal Election Commission (FEC), which has the power to regulate campaigns and their communications with voters. However, the FEC has been largely at a standstill since 2006, when Sen. Mitch McConnell realized that campaign finance enforcement could be stunted by appointing Republicans to the evenly divided, bipartisan FEC who would refuse to apply federal law, according to Craig Holman, a Capitol Hill lobbyist for Public Citizen.
“Since then, enforcement and rulemaking by the FEC have ground largely to a halt,” Holman told Truthout in an email.
In response to a petition from Public Citizen, the FEC is proposing a new “interpretive rule” clarifying that the existing law against fraudulently impersonating political candidates covers deepfakes generated by artificial intelligence. But watchdogs, including Public Citizen, say the bipartisan campaign regulators should go further and explicitly ban AI-generated media meant to fool voters with a convincing digital impersonation of political candidates.
“The compromise resolution on deepfakes is unique in that two Democratic commissioners have joined with two Republican commissioners in proposing a new regulation to enforce the existing campaign finance law,” Holman said. “However, in order to forge a bipartisan compromise, the resolution does not address the problem squarely and leaves open avenues for avoiding enforcement.”
“A proper resolution would have straightforwardly declared that misleading and harmful deepfakes produced and distributed by a candidate constitute fraudulent misrepresentation by that candidate,” Holman continued.
Public Citizen Co-President Robert Weissman was a bit more blunt, saying in a statement this week that the FEC appears to have “forgotten its purpose and mission, or perhaps its spine.” With deepfakes impacting elections across the world and popping up in the U.S. ahead of a crucial election in November, Weissman said the FEC should be working “actively to deter” political deepfakes, especially videos impersonating candidates and misinforming voters about their positions.
“The FEC’s new proposed ‘interpretive rule’ simply says that fraudulent misrepresentation law applies no matter what technology is used. That’s a resolution of a question that was never in doubt,” Weissman noted.
Still, Weissman said political consultants should know that political deepfakes distributed by candidates and campaigns violate the federal statute prohibiting fraudulent misrepresentation of another candidate or campaign. Whether the FEC will enforce even this rule remains to be seen.