As Election Day Nears, AI Deepfakes Are Spreading — and Facing Little Oversight

Watchdogs want more than the current patchwork of regulations governing artificial media created to mislead the public.

This illustration photo taken in Washington, D.C., on September 10, 2024, shows U.S. singer Taylor Swift's Instagram post endorsing Democratic presidential candidate Kamala Harris and addressing AI images shared by Donald Trump falsely claiming Swift's endorsement.

When pop star Taylor Swift posted her much-anticipated endorsement of Kamala Harris for president on Instagram this week, she explained that the other candidate, Donald Trump, pushed her to be crystal clear about how she plans to vote. Trump recently reposted doctored images on his social media site Truth Social falsely purporting to show Swift and blonde-haired fans endorsing his campaign instead.

As Swift noted, the images are part of a wave of online content known as “deepfakes” that are generated by emerging artificial intelligence, or AI, in order to manipulate public discourse and trick people into believing scams. Such content can be found across the internet, but deepfakes proliferate on networks with little moderation, such as Trump’s Truth Social or Elon Musk’s X.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote in her endorsement of Harris. “It really conjured up my fears around AI, and the dangers of spreading misinformation.”

Thanks to rapid advances in AI technology, “deepfake” videos, images, voice recordings and other meme-ready content that falsely impersonate politicians and celebrities are now a fact of life. With Congress sharply divided along party lines and no laws specifically addressing hyper-realistic, AI-generated imposters and scams on the books, federal regulators and state lawmakers are scrambling to catch up.

Some of Musk’s own fans recently learned about the dangers of deepfakes the hard way after clicking on a fake but convincing video of the controversial tech billionaire promoting an investment scam. The video received tens of thousands of views on social media platforms such as Facebook, and victims reported losing thousands of dollars to unknown scammers.

Musk has embraced right-wing politics and conspiracy theories since buying the social media site formerly known as Twitter. In late July, he shared with his millions of followers a deepfake parody ad that used an AI voice-cloning tool to mimic the voice of Vice President Kamala Harris, revealing AI’s potential to mislead voters.

Some deepfakes appear to be harmless parodies, but political campaigns are already deploying AI-generated ads that look like actual candidates and events but are in fact fake, according to the watchdog group Public Citizen. Originally passed in 1971, the Federal Election Campaign Act prohibits candidates for public office from impersonating other candidates, and it’s also a crime to impersonate a political candidate in order to fundraise off their name. Such tactics are fraudulent and mislead voters.

So far, only the Federal Communications Commission (FCC) has taken decisive action on deepfakes at the federal level. In February, the commission clarified that federal law bans the use of generative AI to create human voices in robocalls in response to a New Hampshire call impersonating President Joe Biden during the Democratic primary. The robocall included Biden’s famous catchphrase “a bunch of malarkey” and suggested that voting in the primary would preclude casting a vote in the November general election.

It was a clear attempt at voter suppression that showcased the dangers that AI deepfakes can pose to democracy. An investigation into the New Hampshire robocall led back to a political consultant, a magician-turned-voice actor in New Orleans and two political consulting firms in Texas. One of those firms, Lingo Telecom, agreed to pay a $1 million civil penalty in a settlement with the FCC.

Under the leadership of FCC Chair Jessica Rosenworcel, the agency has also proposed new rules requiring disclosure to viewers when AI is used in political ads. However, the FCC rulemaking process can take months, if not years, and it’s unlikely the proposal will go into effect before the November elections.

Congress could pass legislation regulating AI and deepfake content, but the Republican-controlled House struggles to simply pass a budget, making congressional action unlikely anytime soon. State lawmakers in both parties have stepped up, and now New Hampshire is one of 40 states considering disclosure rules for AI-generated ads and content, according to Roll Call.

At the federal level, public interest watchdogs are now turning to the Federal Election Commission (FEC), which has the power to regulate campaigns and their communications with voters. However, the FEC has been largely at a standstill since 2006, when Senate Republican leader Mitch McConnell realized that campaign finance enforcement could be stunted by appointing Republican commissioners unwilling to apply federal law at the evenly divided, bipartisan FEC, according to Craig Holman, a Capitol Hill lobbyist for Public Citizen.

“Since then, enforcement and rulemaking by the FEC have ground largely to a halt,” Holman told Truthout in an email.

In response to a petition from Public Citizen, the FEC is proposing a new “interpretative rule” clarifying that existing law against fraudulently impersonating political candidates includes deepfakes generated by artificial intelligence. But watchdogs, including Public Citizen, say the bipartisan campaign regulators should go further and explicitly ban AI-generated media meant to fool voters with a convincing digital impersonation of political candidates.

“The compromise resolution on deepfakes is unique in that two Democratic commissioners have joined with two Republican commissioners in proposing a new regulation to enforce the existing campaign finance law,” Holman said. “However, in order to forge a bipartisan compromise, the resolution does not address the problem squarely and leaves open avenues for avoiding enforcement.”

“A proper resolution would have straightforwardly declared that misleading and harmful deepfakes produced and distributed by a candidate constitute fraudulent misrepresentation by that candidate,” Holman continued.

Public Citizen Co-President Robert Weissman was a bit more blunt, saying in a statement this week that the FEC appears to have “forgotten its purpose and mission, or perhaps its spine.” With deepfakes impacting elections across the world and popping up in the U.S. ahead of a crucial election in November, Weissman said the FEC should be working “actively to deter” political deepfakes, especially videos impersonating candidates and misinforming voters about their positions.

“The FEC’s new proposed ‘interpretive rule’ simply says that fraudulent misrepresentation law applies no matter what technology is used. That’s a resolution of a question that was never in doubt,” Weissman noted.

Still, Weissman said political consultants should know that political deepfakes distributed by candidates and campaigns violate the federal statute prohibiting fraudulent misrepresentation of another candidate or campaign. Whether the FEC will enforce even this rule remains to be seen.
