Over the past few weeks, two interesting digital freedom developments have caught our eye, both related to Facebook.

The first is an article by George Washington University law professor and New Republic legal affairs editor Jeffrey Rosen, who visited Facebook’s Menlo Park headquarters to observe the content policy team that writes and enforces the rules governing speech on the social media platform. The employees look just like the rest of the flip-flop-wearing staff, except for what flashes across their computer screens: porn, hate speech, and vitriol. When Facebook users flag content, these employees serve as first responders, making split-second decisions about whether to leave the content up or remove it from the site. According to Rosen, the staff examine the “time, place, method, and target” of the content and will remove it if three of the four criteria are objectionable. Hate speech targeting institutions is tolerated, but not hate speech targeting protected groups defined by race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability, or medical condition. As Rosen writes:

It’s only when a user categorically reviles a protected group that he crosses the line: “I hate Islam” or “I hate the Pope” is fine; “I hate Muslims” or “I hate Catholics” is not.
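To make the rubric concrete, here is a minimal, purely illustrative sketch of the decision logic Rosen describes: score the four criteria, remove at a three-of-four threshold, and treat a categorical attack on a protected group as an automatic removal. The function and variable names are our own invention, and nothing here reflects Facebook’s actual tooling or the judgment calls its reviewers make.

```python
# Toy sketch of the review rubric described above, not Facebook's real system.
# Content is scored on four criteria -- time, place, method, target -- and
# removed if at least three are judged objectionable, or if it categorically
# attacks a protected group.

PROTECTED_ATTRIBUTES = {
    "race", "ethnicity", "national origin", "religion", "sex",
    "gender", "sexual orientation", "disability", "medical condition",
}

def should_remove(criteria_flags: dict, targets_protected_group: bool) -> bool:
    """criteria_flags maps 'time', 'place', 'method', 'target' to True if objectionable."""
    if targets_protected_group:
        return True
    objectionable = sum(
        bool(criteria_flags.get(k)) for k in ("time", "place", "method", "target")
    )
    return objectionable >= 3

# "I hate the Pope" targets an institution, not a protected class, so it stays up;
# "I hate Catholics" would set targets_protected_group=True and come down.
print(should_remove(
    {"time": True, "place": False, "method": True, "target": False},
    targets_protected_group=False,
))  # False -> content stays up
```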

If the staff are unable to make a determination (in about 20 seconds), they send the content up to the “Deciders,” typically attorneys working in the higher echelons of the public policy team. The upshot of this process is that Facebook’s staff, and their counterparts at social media giants such as Twitter and Google’s YouTube, are guiding free expression online on a global scale. They also take their role seriously, as Rosen found after joining them for a series of meetings on hate speech:

The recent meetings, though not intended to produce a single hate-speech standard, seem to have bolstered the Deciders’ belief in the necessity of embracing the challenges of their unique positions and, perhaps in some cases, how much they relish the work.

The second development that caught our eye relates specifically to Facebook’s new push to address content identified as misogynistic. As the New York Times reported, a coalition of feminist groups sent 5,000 e-mails to advertisers on Facebook, which spurred the company to improve its response to hate speech. In explaining the policy on its blog, Facebook wrote:

In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate.

Pages with titles like “Violently Raping Your Friend Just for Laughs,” along with other lurid and revolting material, are examples of what would now be banned. Facebook intends to bolster the new policy with a “real name” requirement for people posting potentially offensive speech. If you want to write something nasty, in other words, you have to own up to it with your real identity.

Few people would argue that posts advocating misogyny deserve a wide readership, but Facebook’s new policy creates another problem. What are the limits on Facebook itself, a corporation that is not bound by the First Amendment? Writing in GigaOm, Matthew Ingram observes that the policy “gives Facebook even more of a license to practice what amounts to censorship—something the company routinely (and legitimately) gets criticized for doing.” Facebook has already been castigated for banning pages that promote breastfeeding, for example, and Ingram points out that some groups have been labeling posts that promote heterosexuality as hate speech. (Ingram does not identify the groups, but one blogger tried to do so.) Jillian York of the Electronic Frontier Foundation chimed in from her personal Twitter account: “I believe Facebook and other sites should ONLY remove content when required to do so by law. They go way beyond that.” As for real-name registration, netizens are wary that the policy will suppress dissenting speech and make it easier for governments to target activists.

Certainly, Facebook’s recent entry into the multistakeholder Global Network Initiative is a positive sign that the company may be opening its ears to civil society groups, but it’s worth being cautious. At the moment, the profit motive at social media companies often aligns directly with free speech. But commercial incentives are fickle and can shift. As Rosen writes:

As it happens, the big Internet companies have a commercial incentive to pursue precisely that mission. Unless Google, Facebook, Twitter, and other Internet giants draw a hard line on free speech, they will find it more difficult to resist European efforts to transform them from neutral platforms to censors-in-chief for the entire globe.

Listen to an in-depth conversation with Jeffrey Rosen on The Kojo Nnamdi Show here, and a shorter interview on WNYC’s On the Media here.