Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children
(ProPublica) - A trove of internal documents sheds light on the algorithms that Facebook’s censors use to differentiate between hate speech and legitimate political expression.
In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”
Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.
But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.
“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.
A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins’ incitement to violence passed muster because it targeted a specific sub-group of Muslims — those who are “radicalized” — while Delgado’s post was deleted for attacking whites in general.
One document trains content reviewers on how to apply the company’s global hate speech algorithm. A slide in that document identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.
The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)
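The distinction above can be summarized as a simple rule: a group is protected only when every trait defining it falls into a protected category, and combining a protected trait with an unprotected one produces an unprotected "subset." The following is a minimal, hypothetical sketch of that logic as described in the documents; the category names and the function are illustrative, not Facebook's actual code.

```python
# Hypothetical sketch of the protected-category rule described in the
# training documents. Names here are illustrative, not Facebook's code.

PROTECTED_TRAITS = {
    "race", "sex", "gender_identity", "religious_affiliation",
    "national_origin", "ethnicity", "sexual_orientation",
    "serious_disability_or_disease",
}

def is_protected_group(traits):
    """Return True only if every trait defining the group is protected.

    Mixing in any unprotected trait (occupation, age, behavior, etc.)
    creates a "subset," which under the rules loses protection.
    """
    return bool(traits) and all(t in PROTECTED_TRAITS for t in traits)

# "White men": race + sex, both protected -> protected group
print(is_protected_group({"race", "sex"}))                         # True

# "Female drivers": sex + occupation -> subset, not protected
print(is_protected_group({"sex", "occupation"}))                   # False

# "Radicalized Muslims": religion + behavior -> subset, not protected
print(is_protected_group({"religious_affiliation", "behavior"}))   # False
```

Under this rule, the outcomes in the article follow mechanically: an attack on "white men" is removed, while attacks on "female drivers," "black children," or "radicalized Muslims" are allowed to stand.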
The Facebook Rules
Facebook has used these rules to train its "content reviewers" to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions.
Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook’s algorithm is designed to defend all races and genders equally.
“Sadly,” the rules are “incorporating this color-blindness idea which is not in the spirit of why we have equal protection,” said Danielle Citron, a law professor and expert on information privacy at the University of Maryland. This approach, she added, will “protect the people who least need it and take it away from those who really need it.”
But Facebook says its goal is different — to apply consistent standards worldwide. “The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”
Facebook’s rules constitute a legal world of their own. They stand in sharp contrast to the United States’ First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company’s hate speech algorithm. But they also differ — for example, in permitting postings that deny the Holocaust — from more restrictive European standards.
The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.
The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump’s posts about his campaign proposal to ban Muslim immigration to the United States violated the company’s written policies against “calls for exclusion” of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump’s statements from its policies at the order of Mark Zuckerberg, the company’s founder and chief executive.
The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.
Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook’s decision. Appeals are currently only available to people whose profile, group or page is removed.
The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. “I’ll be the first to say that we’re not perfect every time,” she said.
Facebook is not required by U.S. law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, Section 230 of the Communications Decency Act, was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.
The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.
America Online convinced thousands of volunteers to police its chat rooms in exchange for free access to its service. But as more of the world connected to the internet, the job of policing became more difficult and companies started hiring workers to focus on it exclusively. Thus the job of content moderator — now often called content reviewer — was born.
In 2004, attorney Nicole Wong joined Google and persuaded the company to hire its first-ever team of reviewers, who responded to complaints and reported to the legal department. Google needed “a rational set of policies and people who were trained to handle requests,” for its online forum called Groups, she said.
Google’s purchase of YouTube in 2006 made deciding what content was appropriate even more urgent. “Because it was visual, it was universal,” Wong said.
While Google wanted to be as permissive as possible, she said, it soon had to contend with controversies such as a video mocking the King of Thailand, which violated Thailand’s laws against insulting the king. Wong visited Thailand and was impressed by the nation’s reverence for its monarch, so she reluctantly agreed to block the video — but only for computers located in Thailand.
Since then, selectively banning content by geography — called “geo-blocking” — has become a more common request from governments. “I don’t love traveling this road of geo-blocking,” Wong said, but “it’s ended up being a decision that allows companies like Google to operate in a lot of different places.”
For social networks like Facebook, however, geo-blocking is difficult because of the way posts are shared with friends across national boundaries. If Facebook geo-blocks a user’s post, it would only appear in the news feeds of friends who live in countries where the geo-blocking prohibition doesn’t apply. That can make international conversations frustrating, with bits of the exchange hidden from some participants.
As a result, Facebook has long tried to avoid using geography-specific rules when possible, according to people familiar with the company’s thinking. However, it does geo-block in some instances, such as when it complied with a request from France to restrict access within its borders to a photo taken after the Nov. 13, 2015, terrorist attack at the Bataclan concert hall in Paris.
Bickert said Facebook takes into consideration the laws in countries where it operates, but doesn’t always remove content at a government’s request. “If there is something that violates a country’s law but does not violate our standards,” Bickert said, “we look at who is making that request: Is it the appropriate authority? Then we check to see if it actually violates the law. Sometimes we will make that content unavailable in that country only.”
Facebook’s goal is to create global rules. “We want to make sure that people are able to communicate in a borderless way,” Bickert said.
Founded in 2004, Facebook began as a social network for college students. As it spread beyond campus, Facebook began to use content moderation as a way to compete with the other leading social network of that era, MySpace.
MySpace had positioned itself as the nightclub of the social networking world, offering profile pages that users could decorate with online glitter, colorful layouts and streaming music. It didn’t require members to provide their real names and was home to plenty of nude and scantily clad photographs. And it was being investigated by law-enforcement agents across the country who worried it was being used by sexual predators to prey on children. (In a settlement with 49 state attorneys general, MySpace later agreed to strengthen protections for younger users.)
By comparison, Facebook was the buttoned-down Ivy League social network — all cool grays and blues. Real names and university affiliations were required. Chris Kelly, who joined Facebook in 2005 and was its first general counsel, said he wanted to make sure Facebook didn’t end up in law enforcement’s crosshairs, like MySpace.
“We were really aggressive about saying we are a no-nudity platform,” he said.
The company also began to tackle hate speech. “We drew some difficult lines while I was there — Holocaust denial being the most prominent,” Kelly said. After an internal debate, the company decided to allow Holocaust denials but reaffirmed its ban on group-based bias, which included anti-Semitism. Since Holocaust denial and anti-Semitism frequently went together, he said, the perpetrators were often suspended regardless.
“I’ve always been a pragmatist on this stuff,” said Kelly, who left Facebook in 2010. “Even if you take the most extreme First Amendment positions, there are still limits on speech.”
By 2008, the company had begun expanding internationally but its censorship rulebook was still just a single page with a list of material to be excised, such as images of nudity and Hitler. “At the bottom of the page it said, ‘Take down anything else that makes you feel uncomfortable,’” said Dave Willner, who joined Facebook’s content team that year.
Willner, who reviewed about 15,000 photos a day, soon found the rules were not rigorous enough. He and some colleagues worked to develop a coherent philosophy underpinning the rules, while refining the rules themselves. Soon he was promoted to head the content policy team.
By the time he left Facebook in 2013, Willner had shepherded a 15,000-word rulebook that remains the basis for many of Facebook’s content standards today.
“There is no path that makes people happy,” Willner said. “All the rules are mildly upsetting.” Because of the volume of decisions — many millions per day — the approach is “more utilitarian than we are used to in our justice system,” he said. “It’s fundamentally not rights-oriented.”
Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn’t be allowed, in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of “fake news” — fabricated stories that circulated on Facebook like “Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement” — and growing concern that terrorists are using social media for recruitment.
While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.
One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.