When pop star Taylor Swift posted her much-anticipated endorsement of Kamala Harris for president on Instagram this week, she explained that the other candidate, Donald Trump, pushed her to be crystal clear about how she plans to vote. Trump recently reposted on his social media site Truth Social doctored images falsely purporting to show Swift and blonde-haired fans endorsing his campaign instead.
As Swift noted, the images are part of a wave of online content known as “deepfakes” that are generated by emerging artificial intelligence, or AI, in order to manipulate public discourse and trick people into believing scams. Such content can be found across the internet, but deepfakes proliferate on networks with little moderation, such as Trump’s Truth Social or Elon Musk’s X.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote in her endorsement of Harris. “It really conjured up my fears around AI, and the dangers of spreading misinformation.”
Thanks to rapid advances in AI technology, “deepfake” videos, images, voice recordings and other meme-ready content that falsely impersonate politicians and celebrities are now a fact of life. With Congress sharply divided along party lines and no laws specifically addressing hyper-realistic, AI-generated imposters and scams on the books, federal regulators and state lawmakers are scrambling to catch up.
Some of Musk’s own fans recently learned about the dangers of deepfakes the hard way after clicking on a fake but convincing video of the controversial tech billionaire promoting an investment scam. The video received tens of thousands of views on social media platforms such as Facebook, and victims reported losing thousands of dollars to unknown scammers.
Musk has embraced right-wing politics and conspiracy theories since buying the social media site formerly known as Twitter. In late July, he shared with his millions of followers an AI-generated parody video that used a voice-cloning tool to mimic Vice President Kamala Harris’s voice, revealing AI’s potential to mislead voters.
Some deepfakes appear to be harmless parodies, but political campaigns are already deploying AI-generated ads that depict what look like actual candidates and events but are in fact fake, according to the watchdog group Public Citizen. Originally passed in 1971, the Federal Election Campaign Act prohibits candidates for public office from impersonating other candidates in public, and it is also a crime to impersonate a political candidate in order to fundraise off their name. Such tactics are fraudulent and mislead voters.
So far, only the Federal Communications Commission (FCC) has taken decisive action on deepfakes at the federal level. In February, in response to a New Hampshire robocall impersonating President Joe Biden during the Democratic primary, the commission clarified that federal law bans the use of generative AI to create human voices in robocalls. The robocall included Biden’s famous catchphrase “a bunch of malarkey” and suggested that voting in the primary would preclude casting a vote in the November general election.
It was a clear attempt at voter suppression that showcased the dangers AI deepfakes can pose to democracy. An investigation into the New Hampshire robocall led back to a political consultant, a magician-turned-voice actor in New Orleans and two political consulting firms in Texas. One of those firms, Lingo Telecom, agreed to pay a $1 million civil penalty in a settlement with the FCC.
Under the leadership of FCC Chair Jessica Rosenworcel, the agency has also proposed new rules requiring disclosure to viewers when AI is used in political ads. However, the FCC rulemaking process can take months, if not years, and it’s unlikely the proposal will go into effect before the November elections.
Congress could pass legislation regulating AI and deepfake content, but the Republican-controlled House struggles to simply pass a budget, making congressional action unlikely anytime soon. State lawmakers in both parties have stepped up, and now New Hampshire is one of 40 states considering disclosure rules for AI-generated ads and content, according to Roll Call.
At the federal level, public interest watchdogs are now turning to the Federal Election Commission (FEC), which has the power to regulate campaigns and their communications with voters. However, the FEC has been largely at a standstill since 2006, when then-Senate Majority Leader Mitch McConnell realized that campaign finance enforcement could be stunted by appointing Republicans unwilling to apply federal law to the evenly divided, bipartisan commission, according to Craig Holman, a Capitol Hill lobbyist for Public Citizen.
“Since then, enforcement and rulemaking by the FEC have ground largely to a halt,” Holman told Truthout in an email.
In response to a petition from Public Citizen, the FEC is proposing a new “interpretive rule” clarifying that existing law against fraudulently impersonating political candidates includes deepfakes generated by artificial intelligence. But watchdogs, including Public Citizen, say the bipartisan campaign regulators should go further and explicitly ban AI-generated media meant to fool voters with a convincing digital impersonation of political candidates.
“The compromise resolution on deepfakes is unique in that two Democratic commissioners have joined with two Republican commissioners in proposing a new regulation to enforce the existing campaign finance law,” Holman said. “However, in order to forge a bipartisan compromise, the resolution does not address the problem squarely and leaves open avenues for avoiding enforcement.”
“A proper resolution would have straightforwardly declared that misleading and harmful deepfakes produced and distributed by a candidate constitute fraudulent misrepresentation by that candidate,” Holman continued.
Public Citizen Co-President Robert Weissman was a bit more blunt, saying in a statement this week that the FEC appears to have “forgotten its purpose and mission, or perhaps its spine.” With deepfakes impacting elections across the world and popping up in the U.S. ahead of a crucial election in November, Weissman said the FEC should be working “actively to deter” political deepfakes, especially videos impersonating candidates and misinforming voters about their positions.
“The FEC’s new proposed ‘interpretive rule’ simply says that fraudulent misrepresentation law applies no matter what technology is used. That’s a resolution of a question that was never in doubt,” Weissman noted.
Still, Weissman said political consultants should know that political deepfakes distributed by candidates and campaigns violate the federal statute prohibiting fraudulent misrepresentation of another candidate or campaign. Whether the FEC will enforce even this rule remains to be seen.