Facebook is currently dealing with a debate that is as old as the human race itself: Does a person have the right to say hurtful things?
On Tuesday, Facebook responded to a May 21 open letter submitted by Women, Action & the Media, the Everyday Sexism Project and activist Soraya Chemaly. The letter asserted that the company may have unevenly applied its community standards on gender-based offensive content.
Facebook finds itself in a quandary. At one extreme, the company wants to remain open and permit all content from its users, including off-color comments and jokes. At the other, it must be careful not to alienate its patrons. The firestorm was set off by discussions about the distinction between jokes about rape and jokes about rape culture, as well as recent offensive jokes from Daniel Tosh and Sam Morril.
“We work hard to remove hate speech quickly; however, there are instances of offensive content, including distasteful humor, that are not hate speech according to our definition,” Facebook says. “In these cases, we work to apply fair, thoughtful, and scalable policies.”
Facebook acknowledges that it has quickly removed non-gender-based posts that were not “calls for action” — a call for action being, for example, a page used to plan hate crimes — as well as non-specific hate speech, even though the company did not remove gender-based hate speech in a timely manner. Facebook also admits that “content that should be removed has not been or has been evaluated using outdated criteria.”
Allegations of misogyny could be dangerous for Facebook.
“Facebook’s decision to remove hateful speech is censorship, pure and simple. But it’s in no way heavy handed. It’s probably one of the smartest strategic moves the network has made,” said Maggie Edinger, a New York-based public relations professional, in an interview with Mint Press. “The recent outrage that led to Facebook’s most recent policy move was specifically over misogynistic content. Pew’s recent Internet report found that Facebook draws 72 percent of all women online vs. 62 percent of men. If nearly three-quarters of your traffic is women, and your site has an ongoing problem with fringe male users propagating content that indisputably demeans women, you have no choice but to eliminate this content or risk alienating the majority of your users.”
The question of free speech
This brings up the question of free speech. Does a person have the right to say what he or she feels? Isn’t the determination of what is harmful subjective? If so, wouldn’t any supposedly objective determination of offensiveness amount to de facto censorship?
Answering these questions requires understanding all of the elements of the situation. First, Facebook uses a public complaint system that allows visitors to report offensive pages and materials. Charles Palmer, an associate professor of new media and the executive director of the Center for Advanced Entertainment and Learning Technologies at the Harrisburg University of Science and Technology, argues that “identifying and removing hateful speech and images is a manual process and it can’t be done quick enough for the general public. Facebook is relying on the public to report inappropriate images but that is a slow and tedious process.”
Second, no one in America has a right to universal free speech, despite popular opinion. What the First Amendment to the U.S. Constitution guarantees, and what the courts have generally upheld, is that no government — at the federal, state or local level — can make a law denying a person free speech. Outside the government, it is perfectly legal to restrict speech in private establishments and businesses, as the individual is expected to adhere to the expectations and standards of the establishment.
“Facebook is a private company and thus has the right to censor whatever it wishes to,” said Jillian York, director for international freedom of expression for the Electronic Frontier Foundation. “Under the law, this isn’t censorship — it’s the whim of a private business.”
Ownership of one’s words
However, this does not limit an individual’s right to say whatever he or she wants. The Supreme Court’s 1969 decision in Brandenburg v. Ohio established that no law can limit inflammatory or hurtful speech — or even speech advocating violence — unless the speech “is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”
In other words, it may be legally permissible to scream, “Fire!” in a crowded theater, but it is morally reprehensible. Gabe Rottman, legislative counsel and policy advisor for the American Civil Liberties Union, points out that there are no hate speech laws in the United States because of Brandenburg v. Ohio. Companies have no legal obligation to safeguard free speech or to protect users from hateful speech. Rather, they retain the prerogative to establish and enforce speech rules as they see fit. However, as Rottman pointed out, the hope and expectation remain that speech online is left open and unhindered.
According to Rottman, the best way to deal with hate online is not to censor it, but to allow other users to address it and enforce group expectations — in other words, to “hate the hate.” To extend the crowded-theater metaphor, the theater’s patrons would ideally enforce “societal expectations” on the offender themselves.
However, the question of whether Facebook should take on this responsibility is a complicated one that is not easily answered.
“Facebook’s policy professes to create an environment to connect the world, but that world consists of cultures with radically different ideas on who or what should be free from criticism, be it caustic or benign,” said Adam Steinbaugh, an online security legal expert. “That will create turbulence for Facebook in implementing its policies even-handedly. International dialogue online means that people will be exposed to hateful speech — and, with it, speech contrary to hateful ideologies. Burying hate speech in hope that its adherents will disappear only serves to lull people into thinking that racism, sexism, homophobia, and so forth, have gone away.”
“I’m skeptical as to whether Facebook can effectively implement its policy,” Steinbaugh continued. “Different employees policing a vast field of content will necessarily result in an uneven application of a subjective policy. That risks controversy every time Facebook opts not to remove content — or when it removes content that some perceive to be critical, but not hateful.”