More than 50 congressional lawmakers and 30 organizations have urged the Federal Election Commission to regulate the use of deceptive artificial intelligence in campaign ads in support of a petition from the consumer rights group Public Citizen.
While the Federal Election Campaign Act, which established the FEC, does not address the use of deceptive AI explicitly, federal campaign finance law does prohibit politicians and those working for them from posing as another campaign. Public Citizen has argued that the provision on “fraudulent misrepresentation” should apply to deliberately deceptive AI-generated content that falsely shows a federal candidate saying or doing something they did not.
The petition, which Public Citizen first filed in July, comes as candidates and political groups begin to experiment with AI in campaign ads, raising concerns about the technology’s potential to disrupt U.S. elections. The internet is already awash with manipulated content, but generative AI technology now makes it possible for anyone with no training and little, if any, money to quickly fabricate so-called deepfake images and audio recordings that could soon be indistinguishable from genuine content. Political experts warn that the technology could accelerate the spread of false or misleading information — and sow doubts about election integrity among an already skeptical public.
“If voters lose trust in elections, they lose trust in the pillar of democracy,” Craig Holman, an expert on government ethics and a lobbyist for Public Citizen, told OpenSecrets. “That’s the danger.”
Multiple stakeholders submitted comments supporting Public Citizen’s position, including 51 U.S. Senate and House Democrats who also pressed the FEC to require disclaimers on political ads that use AI-generated content. The Partnership on AI — a nonprofit coalition of more than 100 advocacy groups, academic organizations, media and tech companies — urged quick action as well, citing “the speed and scale with which highly-realistic synthetic media can be created and shared using increasingly accessible generative AI.” Several leading companies involved in the development of artificial intelligence are part of the partnership, including OpenAI, Adobe, Alphabet, Meta and Microsoft.
Many comments noted that candidates are already deploying AI against their opponents in the lead-up to the 2024 elections.
In May 2023, former President Donald Trump’s campaign released a video clip that used deepfaked voices of Elon Musk and George Soros to mock Republican Florida Gov. Ron DeSantis’ glitch-filled presidential campaign announcement on X, the social media platform then known as Twitter.
The following month, a Twitter account associated with DeSantis’ campaign released an ad containing AI-generated images of Trump embracing Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases. The fake images, which appeared alongside genuine photos of the two men, painted the former president as an ally of Fauci, whose response to the COVID-19 pandemic has been criticized by some conservatives.
Never Back Down, a pro-DeSantis super PAC, also posted an audio clip that used AI to give voice to Trump’s attacks on Iowa Republican Gov. Kim Reynolds. Although the audio accurately reflected what the former president had written about Reynolds on his social media platform Truth Social, Trump never spoke those words.
Public Citizen’s petition also references Paul Vallas, the runner-up in Chicago’s 2023 mayoral race. State and local elections are beyond the authority of the FEC, but the election offers a cautionary tale about the disruptive power of deepfakes.
On the eve of the Feb. 28 mayoral election, a Twitter account calling itself Chicago Lakefront News posted a fake audio recording in which a voice indistinguishable from Vallas’ can be heard downplaying police brutality. The hyper-realistic clip was quickly debunked and removed from the social media platform — but not before it was shared by thousands of people, according to the Hill. Representatives from X did not respond to requests for information about the incident.
Vallas went on to win a plurality but not a majority of the vote, triggering an April run-off election that he lost to now-Mayor Brandon Johnson by four percentage points. While Vallas does not blame the fake recording for his loss, he thinks that deepfakes can cause irreparable harm to a campaign.
“The problem when you do something like that is even when it’s proven to be doctored, you still suffer the damage from it,” Vallas said. “You throw out a big lie out there, and maybe half of the people realize it’s a lie, but the other half don’t.”
FEC Chair Dara Lindenbaum told OpenSecrets that if the commission were to draft a rule regulating the use of generative AI, it would likely not come into effect during the 2024 presidential election. She said a campaign could ask the commission to weigh the issue in an advisory opinion, which would need to be issued within 60 days of receiving a complete request.
Holman conceded that some examples of AI-generated content in political ads fall outside the authority of the FEC. Public Citizen’s request for rulemaking is narrowly tailored to federal campaign finance law’s prohibition on “fraudulent misrepresentation,” which would not completely prohibit the use of AI in campaign ads, only its use in deceptive deepfakes. The existing provision was inspired by the 1972 presidential election, when operatives for President Richard Nixon’s reelection campaign published documents falsely attributed to his potential challenger U.S. Sen. Edmund Muskie (D-Maine).
“You can pretty easily transfer that to the current day,” said Varsha Midha, a law student at Harvard’s Election Law Clinic who helped draft comments in support of the petition. “If they used an AI-generated deepfake of Muskie, rather than stealing his campaign stationery, that to us seems not to be a difference in the kind of the fraudulent misrepresentation, but rather a difference in the tools and technology used.”
Dissenting comments centered not on whether AI-generated deepfakes are damaging, but on whether the FEC has the authority to act on Public Citizen’s petition.
“Our opposition to Public Citizen’s petition should not be confused with support or approval of the kinds of advertising they wish to prohibit,” read comments from the D.C.-based Holtzman Vogel law firm, which did not respond to interview requests. “Rather, the Commission is bound by its statute, and not everything that seems wrong or immoral is prohibited by that statute. If enough people believe that ‘there ought to be a law’ here, then Congress has the ability to act.”
The Antonin Scalia Law School Administrative Law Clinic, based out of George Mason University, also argued that the proposed regulations fall outside of the FEC’s scope.
“By its text, [the fraudulent misrepresentation provision] penalizes only misrepresentation of campaign authority or who has sponsored the message,” the clinic wrote in comments to the FEC. “While the statute prohibits a person from fraudulently representing that he or she works for another candidate (using AI or otherwise), it does not extend to fraud more generally.”
The comment goes on to quote FEC Commissioner Allen Dickerson, who has expressed doubts about the FEC’s authority to regulate deepfakes.
“The statute is carefully limited and is directed at fraudulent agency,” Dickerson said at an FEC meeting in August. “In other words, it is directed at fraudulently pretending that you, yourself represent or work for another candidate. It does not reach fraudulently claiming that your opponent said or did something that he or she did not do. It would be news to many to learn that the FEC may police telling lies about their opponents.”
Sentiments on the FEC’s ability to regulate appear split on partisan lines. The Republican National Committee’s comment echoes the argument of Dickerson, a Trump-appointed Republican member of the FEC. The Democratic National Committee’s comment argues that misrepresentative AI is dangerous and the FEC has the authority to regulate it.
But groups submitting comments on both sides of the issue, including comments from Public Citizen itself, argue that the bulk of the fight against deceptive AI-created political media is in Congress’s hands.
“It’s really important for the FEC to be clear about where its authority ends, so that other actors in the space, whether it’s other agencies or Congress, know where they need to pick up,” said Mason Kortz, a clinical instructor at the Harvard Cyber Law Clinic.
In May, Sen. Amy Klobuchar (D-Minn.) and Rep. Yvette Clarke (D-N.Y.) introduced legislation in the Senate and the House to require disclaimers on political ads that use generative AI. The REAL Political Ads Act would increase transparency and trust in political ads, Clarke told OpenSecrets.
In October, Klobuchar and Clarke also sent a letter to X and Meta, the parent company of Facebook and Instagram, requesting information on steps taken by the companies to limit the spread of AI-generated misinformation on social media.
A bipartisan majority of U.S. adults support the government taking measures to rein in AI in political ads in some form, according to an October poll from the Associated Press-NORC Center for Public Affairs Research. Sixty-six percent said they favored the federal government banning AI-generated content that contains false or misleading images in political ads. Another 62% support politicians making a pledge not to use AI-generated content in their campaigns.