Three local branches of the Proud Boys, a violent gang characterized by SPLC as a “general hate” group, have sponsored Facebook ads. (Their pages are now inactive.) Self-described as “Western chauvinists,” the group has members who are part of the racist alt-right and are associated with white nationalist, anti-Muslim, and misogynistic rhetoric. In October 2018, racist skinheads joined Proud Boys members in an assault on protestors outside a New York City event hosted by the Metropolitan Republican Club that featured Proud Boys founder Gavin McInnes.
“Muslims have a problem with inbreeding,” claimed McInnes in an Islamophobic rant in 2018. “When you have mentally damaged inbreds…and you have a hate book called the Koran…you end up with a perfect recipe for mass murder.”
The Central Florida Proud Boys’ Facebook page ran one ad costing $305 and seven additional ads costing $100 or less. All were paid for by Tyler Ziolkowski, who goes by Tyler Whyte and founded the Florida Proud Boys. That page is inactive, but the page for the Central Florida Post, a media website run by Roger Stone-connected Proud Boy Jacob Engels, is active and has spent $4,560 on Facebook ads since May 2018. Engels, who has written for Alex Jones’ conspiracy operation InfoWars, was banned from Twitter after an Islamophobic tweet.
The New Helvetia Proud Boys of Sacramento, California, paid for two ads, one of which featured a photo of two Proud Boys with Stone and Fox News host Tucker Carlson, alleging that Stone and Carlson support their group. The ad, which cost under $100, was removed for an unspecified violation of Facebook’s ad policies.
The Proud Boys of Knoxville, Tennessee sponsored one ad, which Facebook removed after it had been online for a week.
Facebook’s ad disclosure criteria are fairly broad, encompassing ads that address social issues as well as those that are overtly political or election-centered. Other major platforms’ databases are less comprehensive; for example, Google/YouTube’s political ad database only includes election-related ads since May 31, 2018. The anti-immigrant FAIR has spent nearly $90,000 on election-related Google ads since then, but few, if any, additional hate groups appear in the database.
Twitter’s Ad Transparency Center includes sections for political and issue ads, but its data is not tabulated or downloadable. FAIR has spent nearly $917,000 on Twitter ads since October 2018, but Sludge did not find any other SPLC-designated hate groups in the ad library.
The Political Ad Library of Snap, which owns Snapchat, does not appear to include any ads purchased by hate groups.
A “Disheartening” Approach to Hate Speech Moderation
Abbas Barzegar, director of the Research and Advocacy Department at the Council on American-Islamic Relations (CAIR), says that Facebook and other online platforms have failed the Muslim community.
“CAIR is acutely aware of the problem of hate activity online and the disinformation campaigns and conspiracy theories propagated by the Islamophobia network,” he told Sludge. “It is disheartening to see that social media companies have not adjusted their business models to prevent the spread of hate and extremism online, especially as far-right fascism increasingly radicalizes white youth across the United States.”
Facebook is frequently criticized for inadequately enforcing its hate speech policy. A 2017 ProPublica investigation demonstrated uneven enforcement of its hate speech standards, and Facebook Vice President Justin Osofsky issued an apology. “We’re sorry for the mistakes we have made — they do not reflect the community we want to help build,” he said. “We must do better.”
An award-winning PBS Frontline documentary showed how malicious, anti-Muslim Facebook posts in developing countries led to violence and death. Hate speech and disinformation on Facebook contributed to deadly riots in Sri Lanka and “ethnic cleansing against Myanmar’s Rohingya minority,” according to The New York Times.
According to Facebook, it has improved. In its most recent Community Standards Enforcement Report, covering the fourth quarter of 2018 and the first quarter of 2019, Facebook says it is increasing the effectiveness of its internal hate speech detection, meaning that its reliance on user reporting is declining. The company proactively identified 65% of the hate speech content it removed during the first quarter of 2019, up from 24% in the fourth quarter of 2017.
But this detection appears mainly focused on individual posts, not on the accounts that do the posting. While Facebook may remove the occasional post by, for example, anti-Muslim activist Pamela Geller, she continues to operate a verified Facebook page that sends her followers and other Facebook users to her discriminatory website. (Geller’s Geller Report site has paid $3,359 for nine Facebook ads since May 2018, according to the latest numbers.)
Facebook has consulted SPLC, along with many other organizations, on hate speech issues. But for Hankes and SPLC, the company’s approach to hate speech moderation is deeply flawed. Facebook’s “policy enforcement on these issues does not align with the kind of stated and obvious tactical and strategic decisions of extremist groups when it comes to spreading their ideologies and recruiting people,” Hankes said.
“I’d expect [Facebook] to take some of the same steps they’ve used to deal with dangerous organizations [including terrorist groups and violent white nationalist outfits] when they’re dealing with ads placed by hate groups, which is banning the groups and taking more aggressive enforcement action as opposed to just looking at the ads individually as if they were random comments.”
The reason that Facebook allows many SPLC-designated hate groups to have accounts appears to revolve around its contrasting definitions of hate speech and hate groups. The company’s definition of hate speech is similar to SPLC’s definition of a hate group. “We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability,” state the Community Standards. “We also provide some protections for immigration status.”
But Facebook’s grounds for banning groups are less inclusive. While SPLC views hate groups as those that “attack or malign” entire classes of people, Facebook appears to ban only groups and individuals “that proclaim a violent mission or are engaged in violence,” including “organized hate” groups and terrorist organizations.
Thus, Facebook may take down a hate group’s post that explicitly attacks people based on a “protected characteristic,” but it wouldn’t ordinarily ban that group from its platform if the group didn’t have a mission Facebook considers violent. For example, it removed three pages of the Proud Boys, who advocate violence, but has let hate groups that are extremely discriminatory yet not explicitly violent remain. The contrasting definitions of hate speech and hate groups allow the company to take down some offensive posts but permit numerous hate groups to have a presence, posting, spending money, and recruiting on its platform.
Facebook’s hate policies are “misaligned with how extremist movements work,” Hankes said. “It is a decades-long tactic of these organizations to dress up their rhetoric using euphemisms and using softer language to appeal to a wider audience. They’re not just going to come out with their most extreme ideological viewpoints.”
A hate group’s Facebook page can be a gateway to more severe content, said Hankes. Soft-pedaling allows hate groups to “bring people in gradually…knowing full well that people who are amenable to that message might very well go to their website or go to whatever propaganda they’re operating and get exposed to more extreme rhetoric.”
On March 27, in a post entitled “Standing Against Hate,” Facebook announced “a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram.”
Six months later, white nationalist groups including the VDARE Foundation and publisher Arktos still have active pages. Numerous other hate groups not included in the ad data maintain a presence on Facebook, including the hate-focused gift shop Dixie Republic, hate music label and distributor United Riot Records, and the anti-LGBTQ Liberty Counsel.
The Facebook spokesperson told Sludge that the Community Standards clearly state that hate groups are not allowed on Facebook. The company has a lengthy process to determine which groups are hate groups and doesn’t rely on any single organization’s or academic’s hate group designations; it consults with numerous organizations and experts in the U.S. and internationally, the spokesperson said. Facebook looks at organizations and their leaders that advocate or carry out violence against people based on race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease, or disability.
To combat hate speech and organized hate groups, Facebook uses employees who monitor posts and accounts; technology that flags potential hate speech for human review; and partnerships with academics and other hate group experts. Sludge asked Facebook which groups and individuals it has partnered with, but the spokesperson did not specify any.
On Monday at the United Nations, Facebook joined other tech companies and members of the Global Internet Forum to Counter Terrorism to share their progress on the Christchurch Call to Action, a global anti-hate initiative launched after the March 2019 New Zealand massacre, which a young white nationalist man, radicalized online, perpetrated and livestreamed on Facebook.
CAIR is taking part as well. “As part of the Change the Terms Coalition as well as now part of the Advisory Network of the Christchurch Call, CAIR is working closely with stakeholders to help tech companies confront the problem and find lasting solutions to bigotry, hate, and xenophobia,” said CAIR’s Barzegar. SPLC is also a Change the Terms member.
Exempting Politicians From Community Standards?
For a year now, Facebook has exempted politicians from its Community Standards, allowing their posts and ads to skip its third-party fact-checking process, something Facebook’s Vice President of Global Affairs and Communications Nick Clegg reiterated at The Atlantic Festival on Tuesday. Thus, politicians are free to spread misinformation and “fake news” as they see fit. Facebook currently defines politicians as “officials or candidates at the executive, national and regional levels,” according to Forbes.
Media Matters extremism researcher Natalie Martinez expressed a concern many have with this policy, calling the exemption “absolutely wild” given the “extremist, violent rhetoric on the right.”
Since 2016, Facebook has applied a “newsworthiness exemption” to its standards, meaning that posts that would otherwise be in violation can stay online if the company considers them sufficiently newsworthy. On Tuesday, Clegg announced that Facebook would “treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” However, ads and posts that “endanger people” won’t be exempt from the standards.
Facebook says it has worked to crack down on “fake accounts” for the last three years. But allowing hate groups to remain on Facebook has dangerous consequences, said Hankes.
“We saw a 50% increase in white nationalist groups between the 2017 and the 2018 hate list. We listed the largest number of [overall] hate groups that we’ve ever counted last year,” he said.
“Hate groups were essentially for years allowed to sit there and operate with impunity because tech companies did not want to accept responsibility for them being on the platforms. And what we’re dealing with right now are the consequences of that.
“I think the technology companies played a very, very significant role in this and have truly failed to accept the responsibility that they should for the realities that we’re all having to live through.”