This week, California Gov. Gavin Newsom made good on a promise to crack down on deepfakes by signing two bills into law. The bills require political advertisements to disclose any use of generative AI in the production process and also require social media companies to remove AI-manipulated political ads within 72 hours of receiving a report from a user. Newsom vowed to pass such laws after Elon Musk shared a deepfake video of Vice President Kamala Harris in July. These measures come amid widespread concern about the impact deepfakes and misinformation could have on the presidential race. Musk has already attempted to leverage his platform for Donald Trump’s benefit by hosting a painfully dull conversation with the former president on X, spreading right-wing conspiracy theories, and boosting a xenophobic, anti-Black hoax about Haitian immigrants that Trump and JD Vance have amplified. In a recent post, Musk claimed, “Unless Trump is elected, America will fall to tyranny.”
While Newsom’s move to restrict deepfakes will be lauded by many as an essential step, the potential impact of AI-generated content on the presidential race has been greatly exaggerated. Programs that can produce deepfakes have become more accessible and affordable, but the quality of these videos remains low and unconvincing. The technology may improve and become more threatening, but AI was never going to alter the outcome of this presidential race. The most immediate danger posed by video deepfakes is to private individuals, including children, whose images can be exploited to create artificial sexual content. Some states, including California, have already passed laws banning the creation of such material.
Audio deepfakes, such as the fraudulent robocall that cloned President Biden’s voice during the New Hampshire primary, are more concerning, given that audio clones often create a more convincing product. Still, the audio deepfake of Biden’s voice did not impact the outcome of the New Hampshire primary, which Biden won by a wide margin. Like video deepfakes, audio deepfakes pose a greater threat to everyday people, as scammers can exploit them to impersonate loved ones, sending urgent pleas for financial help. However, unlike video deepfakes, audio deepfakes are not the stuff of viral trends.
The visibility of imagery created by programs like Musk’s Grok, which lacks the ethical safeguards that many AI image generators include, has led many people to worry about the influence of such content. AI-generated imagery may indeed have an impact, but it’s unlikely to be persuasive in the way some fear. While AI-produced media probably won’t convince a confused electorate of outright lies, such content does aid the right-wing memeification of grievance. Hoaxes about Haitian immigrants, videos mimicking politicians, and fake images depicting Harris as a Communist dictator are all part of a political and cultural strategy — what one might call “a firehose of bullshit.” The right’s relentless blasts of bullshit are meant to drown out, rather than refute, legitimate facts and grievances. The point is not to debate reality but to dominate it.
The images, editorial takes, and news we consume each day, at a rate our brains are wholly ill-equipped to process, are the proverbial water we’re swimming in. The more that water is polluted, the more disoriented we become. As we stumble over right-wing lies — fact-checking, refuting, and describing the detritus that surrounds us — we assume a defensive posture. The work of defending the people fascists target takes precedence when those people are under siege, and when the firehose of bullshit is firing full blast, people are constantly under siege. The fabric of any shared reality that extends beyond our political silos is constantly being patched, protected, and reinforced as variations of fascist narratives infiltrate the mainstream.
In the absence of coherence, men like Trump, Vance, and Musk hope people will simply default to cognitive bias, uplifting what feels true as emblematic of their politics. Many of the people sharing memes about immigrants eating cats know that they are sharing debunked information — including JD Vance, who first brought these stories into the presidential discourse. Vance recently told CNN’s Dana Bash, “If I have to create stories so that the American media actually pays attention to the suffering of the American people, then that’s what I’m gonna do.” Of course, when Vance refers to the “suffering of the American people,” he is not talking about Americans who are suffering in prisons, unhoused people, people who are being killed by police, or people who are dying because they cannot access adequate healthcare. He is using the words “the American people” to describe white Americans who feel Black people and immigrants are responsible for all of their misfortunes.
While shameful, Vance’s comment also presents an opportunity to understand the fascistic mindset in matters of storytelling. The far right is creating emblematic narratives for people to latch onto. These stories do not need to be believed to be weaponized as representative of white people’s lived experiences. The goal is to override any discussion of the facts with a torrent of bullshit so that people can choose the emblems that suit them, wear those emblems, and charge into battle. If a particular story is shot down, we are simply challenged to understand why it resonates in the first place. If one story feels too absurd or has been too thoroughly debunked, there are plenty of others to choose from.
Musk has repeatedly characterized hoaxes and conspiracy theories as “concerning” or as the “actual truth.” He responded to a post blaming “mass migration” for the racist lynch mobs in the UK by declaring “civil war is inevitable.” Most recently, he complained, in a now-deleted post, that “no one is even trying to assassinate Biden/Kamala.” While Musk is no strategic genius and lacks prowess as a content creator, he is participating in a political project that does not require brilliance or finesse. The fascist politics Musk has immersed himself in are not coherent or consistent, and the assertions made by right-wing politicians and influencers rarely hold up to scrutiny. Fascist rhetoric agitates and affirms those who appreciate it and disorients and preoccupies those who are threatened by it. Such ideas engulf public discourse and replace any coherent exchange of information or ideas with an endless storm of myth and vitriol. If “a firehose of bullshit” seems too vulgar a metaphor, one could say Musk seeks to turn the media landscape into a snow globe that he can grip and shake in his grubby billionaire hands, engulfing us in artificial storms at his whim.
On Wednesday, Musk posted:
Atheism left an empty space
Secular religion took its place
But left the people in despair
Childless hedonism sans care
Maybe religion’s not so bad
To keep you from being sad
I have written in the past about Musk’s relationship with the transhumanist and longtermist cults of Silicon Valley. Transhumanism was born out of a recognition that, in increasingly secular societies, many people longed for the certainty, structural understanding, and sense of purpose that religion can impart. In place of traditional religions, the tech world has seen the emergence of TESCREAL ideologies that justify the existence of billionaires and indulge fantasies about space exploration, eternal (digital) life, and the idea that we are living in a simulation. Musk’s argument that “Maybe religion’s not so bad” isn’t a sales pitch for Christianity but for the religiosity of his fascistic worldview.
According to Musk’s philosophy, most of us are non-player characters (NPCs). That gamer mindset, which casts the bulk of humanity as lacking independent thought, purpose, or agency, is a fascistic worldview that effectively classifies most people as subhuman.
In early September, Musk reposted a 4chan argument claiming that only “high T alpha males” and “aneurotypical people” are capable of critical thinking:
People who can’t defend themselves physically (women and low T men) parse information through a consensus filter as a safety mechanism. They literally do not ask “is this true”, they ask “will others be OK with me thinking this is true”. This makes them very malleable to brute force manufactured consensus; if every screen they look at says the same thing they will adopt that position because their brain interprets it as everyone in the tribe believing it. Only high T alpha males and aneurotypical people (hey autists!) are actually free to parse new information with an objective “is this true?” filter. This is why a Republic of high status males is best for decision making. Democratic, but a democracy only for those who are free to think.
Musk reposted the quote with the caption, “Interesting observation.” This openly fascistic rhetoric categorizes the majority of us as incapable of independent thought. In Musk’s view, we are non-player characters and, therefore, expendable subhumans.
Musk’s acquisition of Twitter gave him possession of a vehicle for mass communication that has been crucial to the global circulation of news and to the growth of social movements. Twitter also played an important role in allowing marginalized groups to shape popular discourse. One of Musk’s motivations for acquiring Twitter was his fascistic mission to disempower the people and groups he blamed for spreading the “woke mind virus.” By influencing social standards and popularizing their cultural concerns and political grievances, marginalized people were upending dynamics that Musk believes are natural and innate: dynamics that privilege white, “high T alpha males” at the expense of others, whom he views as fundamentally inferior and less deserving of agency and survival.
Movements have continued to utilize Musk’s platform to raise awareness about protests and global atrocities, but the site’s utility for people of conscience has waned, while its usefulness to fascists has grown exponentially. While Musk lacks the posting acumen to have the kind of influence he truly longs for, he has succeeded at further polluting the well of public discourse. How will this climate and Musk’s machinations influence the upcoming election? Some of Musk’s maneuvers on that front are already underway. Musk’s AI chatbot, Grok, which is tied to his social media platform, has spread misinformation about Vice President Kamala Harris’ ballot eligibility in this election. Musk’s super PAC, America PAC, has been accused of misleading voters about whether they have registered to vote while also harvesting their data. Election officials say that Musk’s conspiracy-minded posts alleging that undocumented immigrants will be voting — including his false claim that as many as two million non-citizens have been registered to vote in Texas, Arizona, and Pennsylvania — often coincide with increased demands that voting rolls be purged. These waves of anti-immigrant panic also cause officials to worry about violent threats.
As Melissa Gira Grant has recently written, Trump appears poised to invoke narratives about undocumented immigrants voting to undermine the results of the election should he lose the race to Harris. Tropes about undocumented people voting have been trotted out by Republicans for decades to justify voter suppression tactics, such as voter ID laws. However, as Gira Grant writes:
The more recent twist now is to construct a scheme they say is led by Democrats to use migrants who have recently arrived to commit (nonexistent) voter fraud, sometimes seizing on the work of groups aiding recently arrived migrants, and capitalizing on social media’s power to make a lie louder than the truth.
As the election approaches, I fully expect Musk to shake his snow globe as furiously as he can, spreading hoaxes about voter fraud and other lies about undocumented immigrants. I’m also concerned that he may attempt to incite unrest or foster a sense of danger in specific voting districts. Musk’s comments about the inevitability of “civil war” and his practice of amplifying dehumanizing lies both suggest a willingness to stoke violence to shape electoral outcomes. Musk, of course, is deeply invested in the political trajectory of JD Vance, who some have argued is a political extension of fascistic tech billionaire Peter Thiel. While Musk heaps praise on Trump, I believe his real investment in this election is putting Vance, who is a product of Silicon Valley, one 78-year-old heartbeat away from the presidency.
We are in the midst of Silicon Valley’s biggest power play in the realm of national politics. It comes at a time when Big Tech is ramping up its contributions to climate chaos while over-investing in technologies the market seems poised to reject — products that, conveniently enough, are highly compatible with fascistic governance. While Elon Musk is no strategic genius, it’s important to remember that the same can be said of Donald Trump. Neither of these men is a brilliant thinker, yet one has already ascended to the presidency. While various factors contributed to Trump’s rise in 2016, his presence on Twitter is widely regarded as a key element in his success. Social media is an unwieldy, unpredictable tool, but it is full of unstable political potential.
It’s also important to recognize that a decisive victory for the Democrats would not end the threat techno-fascists like Musk pose. While Musk is heavily invested in this election, this moment is merely a stage in a much larger project for men like Musk and Thiel. If Trump falls, the techno-fascists will push on, and their efforts to rewire the information landscape will continue. With journalism, as an industry, more or less collapsing, we are incredibly vulnerable to such moves. To reduce that vulnerability, we need to double down on the creation of new information networks outside of Musk’s app, support publications and journalists whose work we believe in, and develop our own best practices around consuming, verifying, and spreading information.