Facebook customizes each user's online experience. Each of us has different friends, different likes, different group affiliations and different interests. As a result, Facebook's algorithms generate a different experience for each user, specifically designed to keep us hooked on the platform. That's one key to the platform's success — but it's also what makes Facebook so dangerous. This hyper-customization means each of us lives in a different reality, with a unique set of beliefs and facts that are reinforced by the people and news sources we follow, and the groups we join.
In one Facebook reality, people believe that Joe Biden was legitimately elected president. In an alternate reality, people believe the election was stolen from Donald Trump due to widespread voting irregularities.
In other words, Facebook’s tools have allowed Trump’s bogus and dangerous claims of election impropriety and voter fraud to go viral.
Journalists, scholars and civil-society groups continue to identify and debunk election-fraud claims that appear on Facebook. But it’s a difficult task as disinformation mushrooms and morphs. The New York Times has a page, “Daily Distortions,” that’s devoted to tracking viral disinformation. Researchers at Avaaz identified a network of Steve Bannon-driven voter-fraud disinformation spreaders, and flagged the exponential growth of “Stop the Steal” groups, which are mobilizing to stop a Biden presidency.
Facebook removed the networks Avaaz flagged, but speed, scope and continued vigilance are necessary for the enforcement actions to have a lasting impact. Removing the ubiquitous “Stop the Steal” groups has been the equivalent of pouring water on gremlins — they split and multiply, resulting in havoc and chaos across the platform.
Five days after the Associated Press declared Biden the winner of the election, Trump posted "WE WILL WIN!" along with a video that urged his followers to prove the results wrong. Facebook tacked a label onto the post stating that Biden was the "projected" winner. The label itself is misleading — Biden's victory is far more certain than that — and it has done nothing to stop the spread of Trump's deceitful message: The post has been liked, shared and commented on by hundreds of thousands of people.
In the days since, Trump has published multiple posts proclaiming that he "won" the election — and Facebook has had the same inadequate response to each of them.
And there's every reason to believe that Facebook will do even less to combat disinformation the further we get from Election Day. In October, Mark Zuckerberg told his employees to expect fewer policy changes and content removals after the election.
Facebook knows that divisive and conspiratorial content drives engagement: A 2018 study by its own researchers found as much. But the platform’s executives buried the report, aware that doing anything to curtail engagement would threaten its massive growth rate and billion-dollar revenue streams. Another internal study found that 64 percent of those who joined an extremist group on the platform did so because Facebook’s algorithms recommended it to them.
Protecting the company’s users from disinformation should remain a top priority. In the absence of ongoing enforcement, bad actors will weaponize Facebook at ever greater rates to sow division and hate, destabilize our democracy, disenfranchise voters and poison our information ecosystem.
The fight against disinformation is as important during this post-election period as it was in the run-up to the vote. And given that Facebook has a metric to track "violence and incitement trends," the company is at least aware that threats to our democracy don't simply follow election cycles. Facebook's ongoing efforts to tackle the spread of militarized and dangerous social movements like QAnon indicate that it understands, at some level, that it must remain vigilant against disinformation from people hellbent on destabilizing our democracy.
But is it vigilant enough? In short, no: Facebook could do much more to prevent bad-faith actors from gaming its systems. Instead the company accommodates these users and allows them to inundate the network with dangerous disinformation.
Disinformation is also being used as a tool to recruit and organize. Contrary to what Facebook wants us to believe, disinformation does not have to appear in our personalized news feeds to destabilize the democratic process.
On Nov. 10, Facebook's vice president of analytics and chief marketing officer, Alex Schultz, wrote a post attempting to downplay the kind of content that often appears in the network's top-10 list of most-engaging posts. He claimed that engagement is not the same as "reach," a term used to track how many people actually see a piece of content. In essence, Schultz was arguing that ignorance is bliss — if you don't see something in your feed, it allegedly has no effect on you.
But disinformation on Facebook too frequently jumps from the virtual world to the real one. Joan Donovan, the research director of the Shorenstein Center on Media, Politics and Public Policy, has tracked the true costs of disinformation — as when right-wing militia groups set up "identity" checkpoints after coming to believe that antifa activists had set the California and Oregon wildfires.
While antitrust action against the Silicon Valley giant is reportedly on the horizon, fixing Facebook’s ad-driven business model — which algorithmically amplifies hateful content and disinformation — requires different measures. We need to update privacy laws to protect the civil rights of platform users and prevent platforms’ misuse of their data. We need to tax platforms’ online-advertising revenues to support independent, conspiracy-busting journalism. And we need tech companies to strengthen their community standards and terms of service — and enforce those rules — to prevent the spread of hate and disinformation across their networks.
The Change the Terms coalition, for example, has developed model corporate policies designed to disrupt hate and disinformation on social media. These policies call on internet companies to moderate content in a transparent manner and open themselves up for regular audits. The policies also urge companies to create better tools for identifying and removing hateful activities — and to deplatform groups that recruit and organize violence online.
Transparency would also help us better understand the grave impacts of disinformation by providing researchers, scholars and others with the data they need to deconstruct the company's divisive algorithms. Shining a light on the inner workings of Facebook would go far toward fixing many of the platform's problems. It's time for Facebook to finally put the health of people and our democracy above profits.