Facebook this month published its first Corporate Human Rights Policy (and accompanying blog post) setting out the human rights standards it will “strive to respect” across its platforms. Application of international human rights principles is something advocates inside and outside the company have long called for, so the policy is a welcome development.

That being said, it is a confounding policy. On the one hand, Facebook is offering its clearest commitment to date to support human rights defenders, unequivocally stating that it does not provide direct access to people’s information, and that it will resist demands for access or technical changes to support such demands. On the other hand, Facebook is weakening its own case by explicitly failing to commit to abide by human rights standards when challenged by governments that contravene them: “When faced with conflicts between such laws and our human rights commitments, we seek to honor the principles of internationally recognized human rights to the greatest extent possible. In these circumstances we seek to promote international human rights standards by engaging with governments, and by collaborating with other stakeholders and companies.” The commitment to dialogue is important and admirable. It is also low stakes.

As any policy wonk will say, the distance between policy and implementation is more like a polar expedition than a walk to the fridge. The Facebook Corporate Human Rights Policy has value as a statement of intent and a marker against which Facebook can be held accountable, but it should not be mistaken for the achievement of the important goals it describes. As Facebook aptly put it: “[W]e know that we will be judged based on our actions, not our words.” In that spirit, here are the top questions PEN America will be asking as we evaluate Facebook’s progress in meeting its obligations moving forward.

Will Facebook Fund and Internally Coordinate to Make These Commitments Real?

Facebook’s organizational structure can be obscure, even for people within the company. A variety of teams focus on specific thematic issues, like election integrity, human rights, and public safety, alongside teams working directly on product strategy and implementation; many of these have been repeatedly reorganized in recent years. To meet its human rights obligations, Facebook will need not only to mobilize the teams that have “human rights” and related terms in their job descriptions, but also to ensure that these considerations are integrated into the decision-making processes of the teams closest to the product itself.

Relatedly, policy implementation requires not only a mandate but also funding. It is unclear how the commitments in the policy are tied to real financial investment, either internally or externally. The blog post accompanying the policy makes brief mention of a new “fund to support human rights defenders” but offers little detail.

The new policy signals a certain level of support at high levels in the company, including the vice president for global affairs and communications, the general counsel, and the board. But will that support translate into coordination across the company and the prioritization of human rights as a business and product strategy? When will Facebook’s users feel the difference, if at all?

Will Facebook Take a Stab at Transparency?

The new policy pays welcome attention to transparency, a precondition for accountability. In addition to its existing transparency reports, Facebook commits to annual human rights reporting and to ad hoc disclosures on particular issues as needed. The immediate question this raises is: Just how meaningful will this new transparency be? Will Facebook move beyond publishing simple roll-up metrics and provide a view into how data and due diligence are driving specific changes to policy and practice?

At times, the policy seems to celebrate Facebook’s current transparency measures, which fall well short of the Santa Clara Principles on Transparency and Accountability in Content Moderation (which PEN America endorses) and other formal and informal recommendations that civil society and the research community have made for years. As a result, outside researchers attempting to analyze Facebook’s content moderation practices or assess problems like disinformation on its platforms are too often left working from anecdote, negotiating for privileged access, facing threats from Facebook for trying to collect data, or drawing on data published by competitors to make semi-informed guesses about what might be happening.

It is also very welcome that Facebook’s policy specifically “recognize[s] the importance” of the OECD’s Principles on Artificial Intelligence. The third of these principles holds that “[t]here should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.” Yet the policy does not address algorithmic transparency or disclosure, although it does cite the work of Facebook’s Responsible AI team, which has been engaged in global policy and research efforts in this area. Here as elsewhere, the real takeaway is the slippage between “recognition” and action, and between claiming credit for past work and engagement versus accepting accountability going forward.

How Will Facebook’s Commitment to Human Rights Defenders Work?

Facebook’s strong affirmation of its commitment to protect human rights defenders, as defined by the UN Declaration on Human Rights Defenders, is one of the highlights of the policy. Facebook has provided public and non-public support for human rights defenders for years, and has at times shown a willingness to give activists and NGOs direct help with urgent needs, such as recovering hacked accounts. These efforts have both aided and frustrated human rights defenders, providing help at critical moments while also seeming confusing, inconsistent, and insufficient to the challenges at hand. Facebook’s clear commitment not to provide direct access (presumably to governments) to its users’ data, and to resist demands for the introduction of “back doors” that would allow unilateral direct access, is substantive and timely, especially given the democratic crises many countries are facing and the recurring calls to weaken strong encryption in the name of security.

Our questions here are again about scale and process: Will there be a more formalized process for human rights defenders to engage with Facebook? Given the reference to a new fund for human rights defenders in the accompanying blog post, what funding will be available, and to whom? How will these capacities be developed over time? How will Facebook bring the voices of those most at risk to the center of its strategy?

Why Is Facebook Punting on Remediation?

Facebook’s Human Rights Policy offers little substance on the remediation of human rights impacts. The concept of remediation is that corporations have an ethical obligation to provide redress for human rights harms that result from their business, not merely to make changes based on lessons learned. Facebook’s policy cites its code of conduct and whistleblower policies, its content moderation appeals processes, and the existence of the Facebook Oversight Board, but sidesteps the question of whether Facebook believes these are sufficient.

That puts a lot of weight on the much-heralded Oversight Board. While views of the Facebook Oversight Board vary, and Facebook’s incentive for creating it was likely primarily to ward off regulation, it is indeed a groundbreaking initiative: an independent group of experts that essentially functions as an appeals court for the company’s content moderation decisions. The board began hearing cases late last year, and early results seem modest but positive. We at PEN America hope that the Oversight Board will have a positive, long-term human rights impact on Facebook’s policies as well as on the specific content moderation questions under its purview.

But given the lack of other substantive approaches to remediation, we are left to wonder: Given the tactically narrow jurisdiction of the Oversight Board, how does Facebook intend to identify and remediate harms beyond content moderation, such as the algorithmic amplification of disinformation and hate? How does it think about the risk of its platforms being used to facilitate gross human rights abuses, as they repeatedly have been?

Conclusion: Is Facebook Prepared to Move Fast and Fix Things?

How quickly and effectively can Facebook move to meet its human rights obligations and remediate the harm it has caused or enabled globally?

In keeping with its bullish negotiating tactics elsewhere, Facebook devotes an awful lot of the new policy to rehashing its existing investments and policies. The policy paints Facebook as a de facto human rights leader prepared to solidify its already solid practices. In reality, Facebook faces constant scrutiny for its failures on many of these fronts. Its investment in solving its moderation problems, its commitment to developing features that empower its users, and its engagement with regulatory reform have all been inadequate, and it has waged highly sophisticated trench warfare against regulation, transparency, and accountability.

We have been treated to transparency reports that are published quarterly but say little, to outsourced content moderation practices that themselves raise human rights concerns, and to rhetoric from Facebook executives suggesting that the company simply cannot provide robust protections to every country and community it invites to use its platforms, even at its current levels of unprecedented profitability and scale.

All this means that the question is no longer simply what commitments Facebook has made or will make, but how quickly it will translate those commitments into actual, consequential changes to its policies and practices, and how and when it will remediate the damage it has done. It is certainly true that the task of, for example, training effective AI to moderate content in hundreds of languages is both urgent and difficult, as Facebook CEO Mark Zuckerberg noted in congressional testimony in late March. It is also true that, unlike, say, the Biden-Harris administration’s national vaccine rollout, Facebook has offered no publicly stated dates or milestones that its engineers are working toward or to which it can be held accountable.

In short: Is this policy wired up to anything? The world sure hopes so.


Matt Bailey is PEN America’s digital freedom program director.