No Excuse for Abuse

What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users


Online abuse—from violent threats and hateful slurs to sexual harassment, impersonation, and doxing—is a pervasive and growing problem.1PEN America defines online abuse as the “severe or pervasive targeting of an individual or group online with harmful behavior.” PEN America defines doxing as the “publishing of sensitive personal information online—including home address, email, phone number, social security number, photos, etc.—to harass, intimidate, extort, stalk, or steal the identity of a target.” “Defining ‘Online Abuse’: A Glossary of Terms,” Online Harassment Field Manual, accessed January 2021. Nearly half of Americans report having experienced it,2“Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020; see also Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, and two-thirds say they have witnessed it.3Maeve Duggan, “Online Harassment 2017: Witnessing Online Harassment,” Pew Research Center, July 11, 2017. But not everyone is subjected to the same degree of harassment. Certain groups are disproportionately targeted for their identity and profession. Because writers and journalists conduct so much of their work online and in public, they are especially susceptible to such harassment.4Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting (Rep.),” IWMF/TrollBusters, 2018; “Why journalists use social media,” NewsLab, 2018. Among writers and journalists, the most targeted are those who identify as women, BIPOC, LGBTQIA+, and/or members of religious or ethnic minorities.5For impact on female journalists internationally, see ibid. and Julie Posetti et al., “Online Violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1, 2020; for impact on women and gender-nonconforming journalists in the U.S. and Canada, see Lucy Westcott, “‘The threats follow us home’: Survey details risks for female journalists in U.S., Canada,” CPJ, September 4, 2019; for impact on women of color, including journalists, see “Troll Patrol Findings,” Amnesty International, 2018. Online abuse is intended to intimidate and censor. When voices are silenced and expression is chilled, public discourse suffers. By reducing the harmful impact of online harassment, platforms like Twitter, Facebook, and Instagram can ensure that social media becomes more open and equitable for all users. In this report, PEN America proposes concrete, actionable changes that social media companies can and should make immediately to the design of their platforms to protect people from online abuse—without jeopardizing free expression.

The devastating impact of online abuse

Writers and journalists are caught in an increasingly untenable double bind. They often depend on social media platforms—especially Twitter, Facebook, and Instagram—to conduct research, connect with sources, keep up with breaking news, promote and publish their stories, and secure professional opportunities.6“Why journalists use social media,” NewsLab, 2018; “2017 Global Social Journalism Study,” Cision, accessed February 19, 2021. Yet their visibility and the very nature of their work—in challenging the status quo, holding the powerful accountable, and sharing analysis and opinions—can make them lightning rods for online abuse, especially if they belong to frequently targeted groups and/or if they cover beats such as feminism, politics, or race.7Gina Masullo Chen et al., “‘You really have to have thick skin’: A cross-cultural perspective on how online harassment influences female journalists,” Journalism 21, no. 7 (2018). “If you’re going to be a journalist, there is an expectation to be on social media. I feel that I have no choice. The number of followers is something employers look at,” says Jami Floyd, senior editor of the Justice and Race Unit at New York Public Radio. “This is unfair, because there are not a lot of resources to protect you. No matter what I say about race, there will be some blowback. Even if I say nothing, when my colleague who is a white man takes positions on racism, trolls come after me on social media.”8Jami Floyd, interview with PEN America, June 6, 2020.

A 2018 study conducted by TrollBusters and the International Women’s Media Foundation (IWMF) found that 63 percent of women media workers in the United States have been threatened or harassed online at least once,9Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018; see also Lucy Westcott, “‘The threats follow us home’: Survey details risks for female journalists in U.S., Canada,” CPJ, September 4, 2019; for global stats, see also Julie Posetti et al., “Online Violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1, 2020 (73 percent of the female journalists who responded to this global survey said they had experienced online abuse, harassment, threats, and attacks). a number significantly higher than the national average for the general population.10Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021. Often women are targeted in direct response to their identities.11Women cited disproportionate levels of harassment, including more than three times the gender-based harassment experienced by men (37 percent versus 12 percent). “Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020. “I am often harassed online when I cover white nationalism and anti-Semitism, especially in politics or when perpetrated by state actors,” says Laura E. Adkins, a journalist and opinion editor of the Jewish Telegraphic Agency. “My face has even been photoshopped into an image of Jews dying in the gas chambers.”12Laura E. Adkins, interview with PEN America, June 15, 2020.
Individuals at the intersection of multiple identities, especially women of color, experience the most abuse—by far.13A 2018 study from Amnesty International found that women of color—Black, Asian, Hispanic, and mixed-race women—are 34 percent more likely to be mentioned in abusive or problematic tweets than white women; Black women, specifically, are 84 percent more likely than white women to be mentioned in abusive or problematic tweets. “Troll Patrol Findings,” Amnesty International, 2018.

The consequences are dire. Online abuse strains the mental and physical health of its targets and can lead to stress, anxiety, fear, and depression.14Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018; Lucy Westcott, “‘The Threats Follow Us Home’: Survey Details Risks for Female Journalists in U.S., Canada,” Committee to Protect Journalists, September 4, 2019. In extreme cases, it can escalate to physical violence and even murder.15According to a recent global study of female journalists conducted by UNESCO and the International Center for Journalists (ICFJ), 20 percent of respondents reported that the attacks they experienced in the physical world were directly connected with online abuse. Julie Posetti et al., “Online Violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1, 2020; the Committee to Protect Journalists has reported that 40 percent of journalists who are murdered receive threats, including online, before they are killed. Elisabeth Witchel, “Getting Away with Murder,” CPJ, October 31, 2017. Because the risks to health and safety are very real, online abuse has forced some people to censor themselves, avoid certain subjects, step away from social media,16“Online Harassment Survey: Key Findings,” PEN America, accessed September 2020; Mark Lieberman, “A growing group of journalists has cut back on Twitter, or abandoned it entirely,” Poynter Institute, October 9, 2020; “Measuring the prevalence of online violence against women,” The Economist Intelligence Unit, accessed March 2021, or leave their professions altogether.17Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018. Dr. Michelle Ferrier, a journalist who founded the anti-harassment nonprofit TrollBusters after facing relentless racist and sexist abuse online, recalls: “I went to management. I went to the police. I went to the FBI, CIA. The Committee to Protect Journalists took my case to the Department of Justice. Nothing changed. But I did. I changed as a person. I became angrier. More wary and withdrawn. I had police patrolling my neighborhood. I quit my job to protect my family and young children.”18“About us—TrollBusters: Offering Pest Control for Journalists,” TrollBusters, June 2020.

When online abuse drives women, LGBTQIA+, BIPOC, and minority writers and journalists to leave industries that are predominantly male, heteronormative, and white, public discourse becomes less open and less free.19“What Online Harassment Tells Us About Our Newsrooms: From Individuals to Institutions,” Women’s Media Center, 2020. Individual harms have systemic consequences: undermining the advancement of equity and inclusion, constraining press freedom, and chilling free expression.

Shouting into the void: inadequate platform response

Hate and harassment did not begin with the rise of social media. But because sustaining user attention and maximizing engagement underpins the business model of these platforms, they are built to prioritize immediacy, emotional impact, and virality. As a result, they also amplify abusive behavior.20Amit Goldenberg and James J. Gross, “Digital Emotion Contagion,” Harvard Business School, 2020; Luke Munn, “Angry by design: toxic communication and technical architectures,” Humanities and Social Sciences Communications 7, no. 53 (2020); Molly Crockett, “How Social Media Amplifies Moral Outrage,” The Eudemonic Project, February 9, 2020. In prioritizing engagement over safety, many social media companies were slow to implement even basic features to address online harassment. When Twitter launched in 2006, users could report abuse only by tracking down and filling out a lengthy form for each individual abusive comment. The platform did not integrate a reporting button into the app until 2013;21Alexander Abad-Santos, “Twitter’s ‘Report Abuse’ Button Is a Good, But Small, First Step,” The Atlantic, July 31, 2013; Abby Ohlheiser, “The Woman Who Got Jane Austen on British Money Wants To Change How Twitter Handles Abuse,” Yahoo! News, July 28, 2013, it offered a block feature (to limit communications with an abuser) early on, but did not provide a mute feature (to hide abusive comments without alerting and possibly antagonizing the abuser) until 2014.22Paul Rosania, “Another Way to Edit your Twitter Experience: With Mute,” Twitter Blog, May 12, 2014. While Facebook offered integrated reporting, blocking, and unfriending features within several years of its launch in 2004,23“Facebook Customer Service: Abuse,” Wayback Machine, December 2005, accessed March 2021, it has since lagged behind in adding new features designed to address abuse. The platform only enabled users to ignore abusive accounts in direct messages in 2017 and to report abuse on someone else’s behalf in 2018.24Mallory Locklear, “Facebook introduces new tools to fight online harassment,” Engadget, December 19, 2017; Antigone Davis, “New Tools to Prevent Harassment,” About Facebook, December 19, 2017; Antigone Davis, “Protecting People from Bullying and Harassment,” About Facebook, October 2, 2018. When it launched in 2010, Instagram also required users to fill out a separate form to report abuse, and its rudimentary safety guidelines advised users to manually delete any harassing comments.25“User Disputes,” Wayback Machine, 2011, accessed February 16, 2021. Since 2016, the platform has gradually ramped up its efforts to address online harassment, pulling ahead of Facebook and Twitter, although Instagram did not actually introduce a mute button until 2018.26Megan McCluskey, “Here’s How You Can Mute Someone on Instagram Without Unfollowing Them,” Time, May 22, 2018.

All of these features were added only after many women, people of color, religious and ethnic minorities, and LGBTQIA+ people, including journalists and politicians, had endured countless high-profile abuse campaigns and spent years advocating for change, applying pressure, and generating public outrage.27Alexander Abad-Santos, “Twitter’s ‘Report Abuse’ Button Is a Good, But Small, First Step,” The Atlantic, July 31, 2013; Amanda Marcotte, “Can These Feminists Fix Twitter’s Harassment Problem?,” Slate, November 7, 2014. This timeline points to a much broader issue in tech: in an industry that has boasted of its willingness to “move fast and break things,”28Chris Velazco, “Facebook can’t move fast to fix the things it broke,” Engadget, April 12, 2018, efforts to protect vulnerable users are just not moving fast enough.

Users have noticed. According to a 2021 study from Pew Research Center, nearly 80 percent of Americans believe that social media companies are not doing enough to address online harassment.29Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021. Many of the experts and journalists PEN America consulted for this report concurred. Jaclyn Friedman, a writer and founder of Women, Action & the Media who has advocated with platforms to address abuse, says she often feels like she’s “shouting into a void because there’s no transparency or accountability.”30Jaclyn Friedman, interview with PEN America, May 28, 2020.

Protest against Facebook’s role in spreading online harm in San Francisco in November 2020. Photo by AP/Jeff Chiu

There is a growing international consensus that the private companies that maintain dominant social media platforms have a responsibility, in accordance with international human rights law and principles, to reduce the harmful impact of abuse on their platforms and ensure that they remain conducive to free expression.31Susan Benesch, “But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies,” Yale Journal on Regulation Online Bulletin 3 (September 14, 2020). According to the United Nations’ Guiding Principles on Business and Human Rights (UNGPs), corporations must “avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.”32“Guiding Principles on Business and Human Rights,” United Nations Human Rights Office of the High Commissioner, 2011. In March 2021, Facebook released a Corporate Human Rights Policy rooted in the UNGPs, which makes an explicit commitment to protecting the safety of human rights defenders, including “professional and citizen journalists” and “members of vulnerable groups advocating for their rights,” from online attacks.33“Corporate Human Rights Policy,” Facebook, accessed March 2021. The UNGPs further mandate that states must ensure that corporations live up to their obligations, which “requires taking appropriate steps to prevent, investigate, punish and redress such abuse through effective policies, legislation, regulations and adjudication.”34“Guiding Principles on Business and Human Rights,” United Nations Human Rights Office of the High Commissioner, 2011.

Calls to regulate social media—from civil society,35“Toxic Twitter—A Toxic Place for Women,” Amnesty International, 2018; Eva Galperin and Dia Kayyali, “Abuse and Harassment: What Could Twitter Do?,” Electronic Frontier Foundation, February 20, 2015, legislators,36Davey Alba, “Facebook Must Better Police Online Hate, State Attorneys General Say,” The New York Times, August 5, 2020, and private companies37Kelly Tyko, “Facebook advertising boycott list: Companies halting ads include Unilever, Coca-Cola, Verizon, Ben & Jerry’s,” USA Today, June 27, 2020—are mounting. There is also a growing recognition that the design of these platforms—from user experience and product features to the underlying algorithms—is inextricable from the targeted advertising and attention economy that underpins their business models.38Nathalie Maréchal et al., “It’s the Business Model: How Big Tech’s Profit Machine is Distorting the Public Sphere and Threatening Democracy,” Ranking Digital Rights, March–May 2020. Legislative and regulatory solutions are critically important, but they are also fraught, complex, and hard to get right without further undermining the safety and free speech of individuals and communities already struggling to be heard online. These efforts will take time, but immediate action is urgently needed.

What can platforms do now to reduce the burden of online abuse?

In this report, PEN America asks: What can social media companies do now to ensure that users disproportionately impacted by online abuse receive better protection and support? How can social media companies build safer spaces online? How can technology companies, from giants like Facebook and Twitter to small startups, design in-platform features and third-party tools that empower targets of abuse and their allies and disarm abusive users, while preserving free expression? What’s working, what can be improved, and where are the gaps? Our recommendations include proactive measures that empower users to reduce risk and minimize exposure, reactive measures that facilitate response and alleviate harm, and accountability measures that deter abusive behavior.

Among our principal recommendations, we propose that social media companies should:

  • Build shields that enable users to proactively filter abusive content (across feeds, threads, comments, replies, direct messages, etc.) and quarantine it in a dashboard, where they can review and address it with the help of trusted allies.
  • Enable users to assemble rapid response teams and delegate account access, so that trusted allies can jump in to provide targeted assistance, from mobilizing supportive communities to helping document, block, mute, and report abuse.
  • Create a documentation feature that allows users to quickly and easily record evidence of abuse—capturing screenshots, hyperlinks, and other publicly available data automatically or with one click—which is critical for communicating with employers, engaging with law enforcement, and pursuing legal action.
  • Create safety modes that make it easier to customize privacy and security settings, visibility snapshots that show how adjusting settings impacts reach, and identities that enable users to draw boundaries between the personal and the professional with just a few clicks.
  • For extreme or overwhelming abuse, create an SOS button that users could activate to instantly trigger additional in-platform protections and an emergency hotline (phone or chat) that provides personalized, trauma-informed support in real time.
  • Create a transparent system of escalating penalties for abusive behavior—including warnings, strikes, nudges, temporary functionality limitations, and suspensions, as well as content takedowns and account bans—and spell out these penalties for users every step of the way.
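To make the first recommendation above more concrete, here is a minimal sketch of how a user-controlled “shield” might quarantine matching content into a reviewable dashboard rather than surfacing it in the feed. Everything here is hypothetical and greatly simplified: the class and field names are invented, and a production system would rely on trained classifiers and platform-side infrastructure, not a simple keyword match.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a user-configured shield: messages matching the
# user's chosen filter terms are diverted into a quarantine list (the
# "dashboard"), where the user or a trusted ally can review them later.

@dataclass
class Shield:
    blocked_terms: set                                    # terms the user chose to filter
    quarantine: list = field(default_factory=list)        # held for later review

    def allows(self, message: str) -> bool:
        """Return True if the message may appear in the feed."""
        lowered = message.lower()
        if any(term in lowered for term in self.blocked_terms):
            self.quarantine.append(message)               # divert to the dashboard
            return False
        return True

# Usage: build a feed while diverting anything the shield catches.
shield = Shield(blocked_terms={"slur1", "slur2"})
feed = [m for m in ["hello!", "you slur1"] if shield.allows(m)]
# feed keeps "hello!"; the abusive message lands in shield.quarantine
```

The key design point the sketch illustrates is that filtered content is held rather than deleted, so targets and their allies retain the ability to review, document, and report it.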


Our proposals are rooted in the experiences of writers and journalists who identify as women, BIPOC, LGBTQIA+, and/or members of religious or ethnic minorities in the United States, where PEN America’s expertise on online abuse is strongest. We recognize, however, that online abuse is a global problem and endeavor to note the risks and ramifications of applying strategies conceived in and for the United States internationally.39“Activists and tech companies met to talk about online violence against women: here are the takeaways,” Web Foundation, August 10, 2020. We focus on Twitter, Facebook, and Instagram—because United States-based writers and journalists rely on these platforms most in their work,40Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting (Rep.),” IWMF/TrollBusters, 2018; “Why journalists use social media,” NewsLab, 2018; “2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, and because it is on these platforms that United States-based users report experiencing the most abuse.41“Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020; see also Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021. But our recommendations are relevant to all technology companies that design products to facilitate communication and social interaction.

We draw a distinction between casual and committed abuse: the former is more organic and plays out primarily among individuals; the latter is premeditated, coordinated, and perpetrated by well-resourced groups. We make the case that technology companies need to better protect and support users facing both day-to-day abuse and rapidly escalating threats and harassment campaigns. While we propose tools and features that can both disarm abusers and empower targets and their allies, we recognize that the lines between abuser, target, and ally are not always clear-cut. In a heavily polarized environment, online abuse can be multidirectional. Though abusive trolls are often thought of as a “vocal and antisocial minority,” researchers at Stanford and Cornell Universities stress that “anyone can become a troll.”42Justin Cheng et al., “Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions,” 2017. Research conducted in the gaming industry found that the vast majority of toxicity came not from committed repeat abusers but from regular users “just having a bad day.”43Jeffrey Lin, “Doing Something About The ‘Impossible Problem’ of Abuse in Online Games,” Vox, July 7, 2015, accessed February 16, 2021. Because a user can be either an abuser or a target at any time, tools and features designed to address online abuse must approach it as a behavior—not an identity.

No single strategy to fight online abuse will be perfect or future-proof. Any tool or feature for mitigating online abuse could have unintended consequences or be used in ways counter to its intended purpose. “You have to design all of these abuse reporting tools with the knowledge that they are going to be misused,” explains Leigh Honeywell, co-founder and CEO of Tall Poppy, a company that provides protection for individuals and institutions online.44Leigh Honeywell, interview with PEN America, May 15, 2020. Ensuring that systems are designed to empower users rather than simply prohibit bad behavior can help mitigate those risks, preserving free expression while making these tools more resilient to evolving threats.

If technology companies are serious about reducing the harm of online abuse, they must prioritize understanding the experiences and meeting the needs of their most targeted users. Every step of the way, platforms need to “center the voices of those who are directly impacted by the outcome of the design process,”45Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (MIT Press, 2020) argues Dr. Sasha Costanza-Chock, associate professor of civic media at MIT. Moreover, to build features and tools that address the needs of vulnerable communities, technology companies need staff, consultation, and testing efforts that reflect the perspectives and experiences of those communities. Staff with a diverse range of identities and backgrounds need to be represented across the organization—among designers, engineers, product managers, trust and safety teams, etc.—and they need to have the power to make decisions and set priorities. If platforms can build better tools and features to protect writers and journalists who identify as women, BIPOC, LGBTQIA+, and members of religious or ethnic minorities, they can better serve all users who experience abuse.

As an organization of writers committed to defending freedom of expression, PEN America views online abuse as a threat to the very principles we fight to uphold. When people stop speaking out and writing about certain topics due to fear of reprisal, everyone loses. Even more troubling, this threat is most acute when people are trying to engage with some of the most complex, controversial, and urgent questions facing our society—questions about politics, race, religion, gender and sexuality, and domestic and international public policy. Democratic structures depend on a robust, healthy discourse in which every member of society can engage. “You can’t have free expression of ideas if people have to worry that they’re going to get doxed or they’re going to get threatened,” notes Mary Anne Franks, president of the Cyber Civil Rights Initiative and professor of law at the University of Miami. “So if we could focus the conversation on how it is that we can create the conditions for free speech—free speech for reporters, free speech for women, free speech for people of color, free speech for people who are targeted offline—that is the conversation we have to have.”46Mary Anne Franks, interview with PEN America, May 22, 2020.

At the same time, we are leery of giving private companies unchecked power to police speech. Contentious, combative, and even offensive views often do not rise to the level of speech that should be banned, removed, or suppressed. Content moderation can be a blunt instrument. Efforts to combat online harassment that rely too heavily on taking down content, especially given the challenges of implicit bias in both human and automated moderation, risk sweeping up legitimate disagreement and critique and may further marginalize the very individuals and communities such measures are meant to protect. A post that calls for violence against a group or individual, for instance, should not be treated the same as a post that might use similar language to decry that very behavior.47Mallory Locklear, “Facebook is still terrible at managing hate speech,” Engadget, August 3, 2017; Tracey Jan and Elizabeth Dwoskin, “A White Man Called Her Kids the N-Word. Facebook Stopped Her from Sharing It,” The Washington Post, July 31, 2017. Furthermore, some tools that mitigate abuse can be exploited to silence the marginalized and censor dissenting views. More aggressive policing of content by platforms must be accompanied by stepped-up mechanisms that allow users to appeal and achieve timely resolution in instances where they believe that content has been unjustifiably suppressed or removed. Throughout this report, in laying out our recommendations, we address the tensions that can arise in countering abuse while protecting free expression, and propose strategies to mitigate weaponization and unintended consequences. While the challenges and tensions baked into reducing online harms are real, technology companies have the resources and power to find solutions. Writers, journalists, and other vulnerable users have, for too long, endured relentless abuse on the very social media platforms that they need to do their jobs. It’s time for technology companies to step up.

Empowering Targeted Users and Their Allies

In this section we lay out proactive and reactive measures that platforms can take to empower users targeted by online abuse and their allies. Proactive measures protect users from online abuse before it happens or lessen its impact by giving its targets greater control. Unfortunately, proactive measures can sometimes be fraught from a free expression standpoint. Sweeping or sloppy implementation, often rooted in algorithmic and human biases abetted by a lack of transparency, can result in censorship, including of creative and journalistic content.48Scott Edwards, “YouTube removals threaten evidence and the people that provide it,” Amnesty International, November 1, 2017; Jillian C. York, “Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas,” Electronic Frontier Foundation, February 13, 2018; Abdul Rahman Al Jaloud et al., “Caught in the Net: The Impact of ‘Extremist’ Speech Regulations on Human Rights Content,” Electronic Frontier Foundation, Syrian Archive, and Witness, May 30, 2019. Reactive measures, such as blocking and muting to limit interaction with abusive content, mitigate the harms of online abuse once it is underway but do little to shield targets. Such features sidestep many of the first-order free expression risks associated with proactive measures but are often, on their own, insufficient to protect users from abuse.

It is important to bear in mind that both proactive and reactive measures are themselves susceptible to gaming and weaponization.49Katie Notopoulos, “How Trolls Locked My Twitter Account For 10 Days, And Welp,” BuzzFeed News, December 2, 2017; Tracey Jan and Elizabeth Dwoskin, “A White Man Called Her Kids the N-Word. Facebook Stopped Her from Sharing It,” The Washington Post, July 31, 2017; Russell Brandom, “Facebook’s Report Abuse button has become a tool of global oppression,” The Verge, September 2, 2014. In many cases, the difference between an effective strategy and an ineffective or overly restrictive one depends not only on policies but also on the specifics of how tools and features are designed and whom they prioritize and serve. Our recommendations aim to strike a balance between protecting those who are disproportionately targeted by online abuse for their identity and profession and safeguarding free expression.

Proactive measures: Reducing risk and exposure

Proactive measures are often more effective than reactive ones because they can protect users from encountering abusive content—limiting their stress and trauma and empowering them to express themselves more freely. They can also enable users to reduce their risk and calibrate their potential exposure by, for example, fine-tuning their privacy and security settings and creating distinctions between their personal and professional identities online.

Today, most major platforms provide some proactive protections, but these are often difficult to find, understand, and use. Many of the writers and journalists PEN America works with, including those interviewed for this report, were unaware of existing features and tools and found themselves scrambling to deal with online harassment only after it had been unleashed. “Young journalists,” says Christina Bellantoni, a professor at the USC Annenberg School for Communication and Journalism, often “don’t familiarize themselves with policies and tools because they don’t predict they will ever face problems. When they do, it’s too late. Tools to help young journalists learn more about privacy settings from the outset would go a long way.”50Christina Bellantoni, email to PEN America, January 25, 2021. Social media companies should design and build stronger proactive measures, make them more accessible and user-friendly, and educate users about them.

Safety modes and visibility snapshots: Making it easier to control privacy and security

The challenge: Writers and journalists are especially vulnerable to hacking, impersonation, and other forms of abuse predicated on accessing or exposing private information.51Jeremy Wagstaff, “Journalists, media under attack from hackers: Google researchers,” Reuters, March 28, 2014; Reporters Committee for Freedom of the Press, “The dangers of journalism include getting doxxed. Here’s what you can do about it,” Poynter Institute, May 19, 2015. To reduce risks like these, users need to be able to easily fine-tune the privacy and security settings on their social media accounts, especially because platforms’ default settings often maximize the public visibility of content.52“Twitter is public by default, and the overwhelming majority of people have public Twitter accounts. Geolocation is off by default.” Email to PEN America from Twitter spokesperson, October 2020; Matthew Keys, “A brief history of Facebook’s ever-changing privacy settings,” Medium, March 21, 2018.

Some platforms have gradually given users more granular control over their settings, which is a positive trend.53Matthew Keys, “A brief history of Facebook’s ever-changing privacy settings,” Medium, March 21, 2018, Providing users with maximum choice and control without overwhelming them is a difficult balancing act.54Kat Lo, interview with PEN America, May 19, 2020; Caroline Sinders, Vandinika Shukla, and Elyse Voegeli, “Trust Through Trickery,” Commonplace, PubPub, January 5, 2021, The usability of these tools is just as important as their sophistication. “Every year—like clockwork—Facebook has responded to criticisms of lackluster security and data exposure by rolling out ‘improvements’ to its privacy offerings,” writes journalist Matthew Keys. “More often than not, Facebook heralds the changes as enabling users to take better control of their data. In reality, the changes lead to confusion and frustration.”55Matthew Keys, “A Brief History of Facebook’s Ever-Changing Privacy Settings,” Medium, March 21, 2018,

Adding to the problem, there is no consistency across platforms in how privacy and security settings work or the language used to describe them. These settings are often buried within apps or separate help centers and are time-consuming and challenging to find and adjust.56Caroline Sinders, Vandinika Shukla, and Elyse Voegeli. “Trust Through Trickery,” Commonplace, PubPub, January 5, 2021,; Michelle Madejski, Maritza Johnson, Steven M Bellovin, “The Failure of Online Social Network Privacy Settings,” Columbia University Computer Science Technical Reports (July 8, 2011), Even “Google’s own engineers,” according to Ars Technica, have been “confused” by its privacy settings.57Kate Cox, “Unredacted Suit Shows Google’s Own Engineers Confused by Privacy Settings,” ArsTechnica, August 25, 2020,

While many writers and journalists want to maximize their visibility and user engagement, if they find themselves in the midst of an onslaught of abuse—or anticipate one—they need to be able to quickly and easily reduce their visibility until the trouble has passed. Because tightening privacy has real trade-offs, understanding the implications of adjusting specific settings is critically important. As journalist Jareen Imam points out, when users find it “confusing to see what is public and what is not,” they struggle to weigh trade-offs and make informed choices.58Jareen Imam, interview with PEN America, August 25, 2020.

Existing features and tools: As some platforms add increasingly granular choices for adjusting settings, they are also experimenting with features to streamline the process. With Twitter’s “protect my tweets” and Instagram’s “private account” features, users can now tighten their privacy with a single click, restricting who can see their content or follow them. But they cannot then customize settings within these privacy modes to maintain at least some visibility and reach.59“About Public and Protected Tweets,” Twitter, accessed September 2020,; “How do I set my Instagram account to private so that only approved followers can see what I share?,” Instagram Help Center, accessed February 19, 2021,

Facebook’s settings are notoriously complicated, and users don’t have a one-click option to tighten privacy and security throughout an account.60In India, Facebook introduced a “Profile Picture Guard” feature in 2017 and seems to be experimenting with a new feature that allows users to “Lock my profile,” which means “people they are not friends with will no longer be able to see photos and posts — both historic and new — and zoom into, share and download profile pictures and cover photos.” However, this feature does not yet appear to be available in multiple countries. Manish Singh, “Facebook rolls out feature to help women in India easily lock their accounts,” TechCrunch, May 21, 2020, Its users can proactively choose to limit the visibility of individual posts,61Justin Lafferty, “How to Control who sees your Facebook posts,” Adweek, March 22, 2013, but they cannot make certain types of content, such as a profile photo, private,62“How Do I Add or Change My Facebook Profile Picture?,” Facebook Help Center, accessed January 19, 2021,; “Who Can See My Facebook Profile Picture and Cover Photo?,” Facebook Help Center, accessed January 19, 2021, which can result in the misuse of profile photos for impersonation or non-consensual intimate imagery.63Woodrow Hartzog and Evan Selinger, “Facebook’s Failure to End ‘Public by Default’,” Medium, November 7, 2018, The platform does offer user-friendly, interactive privacy and security checkups.64Germain, T., “How to Use Facebook Privacy Settings”, Consumer Reports, October 7, 2020,; Matthew Keys, “A Brief History of Facebook’s Ever-Changing Privacy Settings,” Medium, March 21, 2018,; “Safety Center,” Facebook, accessed December, 2020,

In trying to comprehend byzantine settings, some users have turned to external sources. Media outlets and nonprofits, including PEN America, offer writers and journalists training and guidance on tightening privacy and security on social media platforms.65PEN America and Freedom of the Press Foundation offer hands-on social media privacy and security training. See also: “Online Harassment Field Manual,” PEN America, accessed November 16, 2020,; Viktorya Vilk, “What to do if you’re the target of online harassment,” Slate, June 3, 2020,; Kristen Kozinski and Neena Kapur, “How to Dox Yourself on the Internet,” The New York Times, February 27, 2020, Third-party tools such as Jumbo and Tall Poppy walk users through adjusting settings step by step.66Casey Newton, “Jumbo is a powerful privacy assistant for iOS that cleans up your social profiles,” The Verge, April 9, 2019,; “Product: Personal digital safety for everyone at work,” Tall Poppy, accessed September 2020, While external tools and training are useful and badly needed, few writers, journalists, and publishers currently have the resources or awareness to take advantage of them.67Jennifer R. Henrichsen et al., “Building Digital Safety For Journalism: A survey of selected issues,” UNESCO, 2015, (“Digital security training programs for human rights defenders and journalists are increasing. However, approximately 54 percent of 167 respondents to the survey for this report said they had not received digital security training.”) Moreover, the very existence of such tools and training is indicative of the difficulty of navigating privacy and security within the platforms themselves.

Recommendations: Platforms should provide users with robust, intuitive, user-friendly tools to control their privacy and security settings. Specifically, platforms should:

  • Empower users to create and save “safety modes”—multiple, distinct configurations of privacy and security settings that they can then quickly activate with one click when needed.
    • Twitter and Instagram should give users the option to fine-tune existing safety modes (“protect my tweets” and “private account,” respectively) after users activate them. These modes are currently limited in functionality because they are binary (i.e., the account is either private or not).
    • Facebook should introduce a safety mode that allows users to go private with just one click, as Twitter and Instagram have already done, while also ensuring that users can then fine-tune specific settings in the new safety mode.
  • Introduce “visibility snapshots” that clearly communicate to users, in real time, the implications of the changes they are making as they adjust their security and privacy settings. One solution is to provide users with a snapshot of what is publicly visible, as Facebook does with its “view as” feature.68“How can I see what my profile looks like to people on Facebook I’m not friends with?,” Facebook Help Center, accessed February 19, 2021, Another is to provide an estimate of how many or which types of users (followers, public, etc.) will be able to see a post depending on selected settings.
    • Twitter and Instagram should add user-friendly, interactive privacy and security checkups, as Facebook has already done, and introduce visibility snapshots.
    • Facebook should enable users to make profile photos private.
  • Regularly prompt users, via nudges and reminders, to review their security and privacy settings and set up the safety modes detailed above. Prompts could proactively encourage users to reconsider including private information that could put them at risk (such as a date of birth or home address).
  • Convene a multi-stakeholder coalition of technology companies, civil society organizations, and vulnerable users—or deploy a specific existing coalition such as the Global Network Initiative,69“Global Network Initiative,” Global Network Initiative, Freedom of Expression and Privacy, July 26, 2020, Online Abuse Coalition,70“Coalition on Online Abuse,” International Women’s Media Foundation, or Trust & Safety Professional Association71“Overview,” Trust and Safety Professional Association, 2021,—to coordinate consistent user experiences and terminology for account security and privacy across platforms.
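To make the “safety modes” and “visibility snapshot” recommendations above concrete, the sketch below models a saved, named settings profile that a user configures calmly in advance and activates with a single action under duress. It is purely illustrative: the class names, fields, and behavior are invented for this report’s recommendations and do not correspond to any platform’s actual API.

```python
from dataclasses import dataclass

# Hypothetical privacy and security settings a platform might expose.
@dataclass
class PrivacySettings:
    account_private: bool = False
    dms_open_to_public: bool = True
    tagging_allowed: bool = True
    location_visible: bool = True

class Account:
    """Sketch of an account that can save and activate named 'safety modes'."""

    def __init__(self):
        self.settings = PrivacySettings()
        self.safety_modes = {}  # mode name -> saved PrivacySettings

    def save_mode(self, name, **overrides):
        # Save a named configuration in advance, without activating it.
        self.safety_modes[name] = PrivacySettings(**overrides)

    def activate_mode(self, name):
        # One-click activation: swap in the saved configuration wholesale.
        self.settings = self.safety_modes[name]

    def visibility_snapshot(self):
        # A rough "who can see my content" summary, recomputed after changes.
        return "followers only" if self.settings.account_private else "public"

account = Account()
account.save_mode("lockdown", account_private=True, dms_open_to_public=False,
                  tagging_allowed=False, location_visible=False)
print(account.visibility_snapshot())  # public
account.activate_mode("lockdown")
print(account.visibility_snapshot())  # followers only
```

The key design choice here is that a mode remains editable after activation (each field can still be fine-tuned individually), unlike the binary private/public toggles the report critiques.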

Identities: Distinguishing between the personal and the professional

The challenge:  For many writers and journalists, having a presence on social media is a professional necessity.72“2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, Yet the boundaries between the personal and professional use of social media accounts are often blurred. The importance of engaging with an audience and building a brand encourages the conflation of the professional with the personal.73Cara Brems et al., “Personal Branding on Twitter How Employed and Freelance Journalists Stage Themselves on Social Media,” Digital Journalism 5, no. 4 (May 3, 2016), As journalist Allegra Hobbs wrote in The Guardian: “All the things that invite derision for influencers—self-promotion, fishing for likes, posting about the minutiae of your life for relatability points—are also integral to the career of a writer online.”74Allegra Hobbs, “The journalist as influencer: how we sell ourselves on social media,” The Guardian, October 21, 2019, A 2017 analysis of how journalists use Twitter found that they “particularly struggle with” when to be “personal or professional, how to balance broadcasting their message with engagement and how to promote themselves strategically.”75Cara Brems et al., “Personal Branding on Twitter How Employed and Freelance Journalists Stage Themselves on Social Media,” Digital Journalism 5, no. 4 (May 3, 2016), While writers and reporters may be mindful of the need for privacy, the challenge, as freelance journalist Eileen Truax explains, is that maintaining a social media presence paves the way for professional opportunities: “Many of the invitations I get to participate in projects come to me because they see my activity on Twitter.”76Eileen Truax, interview with PEN America, May 25, 2020.

This fusion of the personal and professional makes writers and journalists vulnerable. Private information found on social media platforms is weaponized to humiliate, discredit, and intimidate users, their friends, and their families. To mitigate risk, Jason Reich, vice president for corporate security at The New York Times, advises journalists to create distinct personal and professional accounts on social media wherever possible, fine-tune privacy and security settings accordingly, and adjust the information they include for each account.77Jason Reich, interview with PEN America, June 9, 2020. But following such procedures is challenging because platforms make it difficult to distinguish between personal and professional accounts, to migrate or share audiences between them, and to target specific audiences. While users can theoretically create and manage multiple accounts on most platforms, in practice a user who decides to create a professional account or page separate from an existing personal one has to start over to rebuild an audience.78Avery E Holton, Logan Molyneux, “Identity Lost? The personal impact of brand journalism,” SAGE 18, no. 2 (November 3, 2015): 195-210,

The COVID-19 pandemic—which has pushed more creative and media professionals into remote and fully digital work—has intensified this dilemma.79Bernard Marr, “How The COVID-19 Pandemic Is Fast-Tracking Digital Transformation In Companies,” Forbes, May 17, 2020,; Max Willens, “‘But I’m still on deadline’: How remote work is affecting newsrooms,” Digiday, March 17, 2020, “My Instagram is probably the most personal account that I have,” says Jareen Imam, director of social newsgathering at NBC News. “And actually, for a long time, it was a private account. But because of the pandemic, it’s impossible to do reporting without having this public. When you have a private account, you also close yourself off to sources that might want to reach out to tell you something really important.… So it’s public now.”80Jareen Imam, interview with PEN America, August 25, 2020.

Existing features and tools: Twitter81“How to Manage Multiple Accounts,” Twitter, accessed September 28, 2020, and Instagram82Gannon Burgett, “How to Manage Multiple Instagram Accounts,” Digital Trends, May 17, 2019, allow individual users to create multiple accounts and toggle easily between them. Facebook, on the other hand, does not allow one user to create more than one account83“Can I Create Multiple Facebook Accounts?,” Facebook Help Center, accessed January 19, 2021,; “Can I Create a Joint Facebook Account or Share a Facebook Account with Someone Else?,” Facebook Help Center, accessed January 19, 2021, and requires the use of an “authentic name”—that is, the real name that a user is known by offline.84“What names are allowed on Facebook?” Facebook Help Center, accessed February 8, 2021, While Facebook enables users to create “fan pages” and “public figure pages,”85“Can I Create Multiple Facebook Accounts?,” Facebook, accessed August 2020, these have real limitations: By prioritizing the posts of friends and family in users’ feeds, Facebook favors the personal over the professional, curbing the reach of public-facing pages and creating an incentive to invest in personal profiles.86“Generally, posts from personal profiles will reach more people because we prioritize friends and family content and because posts with robust discussion also get prioritized—posts from Pages or public figures very broadly get less reach than posts from profiles.” Email response from Facebook spokesperson, January 21, 2021; Adam Mosseri, “Facebook for Business,” Facebook, January 11, 2018,; Mike Isaac, “Facebook Overhauls News Feed to focus on what Friends and Family Share,” The New York Times, January 11, 2018,

All of the platforms analyzed in this paper are gradually giving users more control over their audience. Twitter is testing a feature that allows users to specify who can reply to their tweets87Suzanne Xie, “Testing, testing… new conversation settings,” Twitter, May 20, 2020, and recently launched a feature that allows users to hide specific replies.88Brittany Roston, “Twitter finally adds the option to publicly hide tweets,” SlashGear, November 21, 2019, Facebook gives users more control over the visibility of individual posts, allowing users to choose among “public,” “friends,” or “specific friends.”89“What audiences can I choose from when I share on Facebook?,” Facebook, accessed November 30, 2020, Instagram has a feature that lets users create customized groups of “close friends” and share stories in a more targeted way, though it has not yet expanded that feature to posts.90Arielle Pardes, “Instagram Now Lets You Share Pics with Just ‘Close Friends’,” Wired, November 30, 2018, But none of these platforms allow individual users to share or migrate friends and followers among multiple accounts or between profiles and pages.91Email response from Facebook spokesperson, January 21, 2021; Email response from Instagram spokesperson, January 15, 2021; Email response from Twitter spokesperson, October 30, 2020.

Recommendations: Platforms should make it easier for users to create and maintain boundaries between their personal and professional identities online while retaining the audiences that they have cultivated. There are multiple ways to achieve this:

  • Enable users to create distinct personal and professional identities, which could coexist within a single account or exist as multiple accounts. Users should be able to toggle easily between identities, adjust privacy and safety settings for each (see “Safety modes,” above), and—crucially—migrate or share audiences between identities.
  • Give users greater control over who can see their individual posts (i.e., friends/followers versus subsets of friends/followers versus the wider public), which is predicated on the ability to group audiences and target individual posts to subsets of audiences. This is distinct from giving users the ability to go private across an entire account (see “Safety modes,” above).
    • Like Twitter and Instagram, Facebook should make it possible for users to create multiple accounts and toggle easily between them, a fundamental, urgently needed shift from its current “one identity” approach. Facebook should also ensure that public figure and fan pages offer audience engagement and reach that are comparable to those of personal profiles.
    • Like Facebook, Twitter and Instagram should make it easier for users to specify who can see their posts and allow users to migrate or share audiences between personal and professional online identities.


Mitigating risk: While most individual users are entitled to exert control over who can see and interact with their content, for public officials and entities on social media, transparency and accountability are paramount. The courts ruled, for example, that during Donald Trump’s presidency, it was unconstitutional for him to block users on Twitter because he was using his “presidential account” for official communications.92Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 1:17-cv-5205 (S.D.N.Y. 2018) There are multiple related cases currently winding their way through the courts, including the ACLU’s lawsuit against state Senator Ray Scott of Colorado for blocking a constituent on Twitter.93“ACLU Sues Colorado State Senator for Blocking Constituent on Social Media,” ACLU of Colorado, June 11, 2019, It is especially important that public officials and entities be required to uphold the boundaries between the personal and professional use of social media accounts and ensure that any accounts used to communicate professionally remain open to all constituents. Public officials and entities must also adhere to all relevant laws for record keeping in official, public communications, including on social media.

A computer screen is surrounded by post it notes that have computer passwords and safety question answers scribbled on them.
Fine-tuning privacy and security settings on social media is critical to reducing the risk of hacking, impersonation, doxing, and other forms of abuse predicated on accessing or exposing private information.
A man wears a t-shirt with all his personal information printed on the front.
Photos by Bronney Hui

Account histories: Managing old content

The challenge: Many writers and journalists have been on social media for over a decade.94Ruth A. Harper, “The Social Media Revolution: Exploring the Impact on Journalism and News Media Organizations,” Inquiries Journal 2, no. 3, (2010), They joined in the early days, when platforms like Facebook were used primarily in personal life and privacy settings often defaulted to “public” and were not granular or easily accessible.95Matthew Keys, “A brief history of Facebook’s ever-changing privacy settings,” Medium, March 21, 2018, But the ways that creative and media professionals use these platforms have since broadened in scale, scope, and reach. Writers’ and journalists’ long histories of online activity can be mined for old posts that, when resurfaced and taken out of context, can be deployed to try to shame a target or get them reprimanded or fired.96Kenneth P. Vogel, Jeremy W. Peters, “Trump Allies Target Journalists Over Coverage Deemed Hostile to White House,” The New York Times, August 25, 2019,; Aja Romano, “The ‘controversy’ over journalist Sarah Jeong joining the New York Times, explained,” Vox, August 3, 2018,

Existing features and tools: On Twitter and Instagram, users can delete content only piecemeal and cannot easily search through or sort old content, which makes managing a long account history cumbersome and impractical.97Abby Ohlheiser, “There’s no good reason to keep old tweets online. Here’s how to delete them,” The Washington Post, July 30, 2018,; David Nield, “How to Clean Up Your Old Social Media Posts,” Wired, June 14, 2020, In June 2020, Facebook launched “manage activity,” a feature that allows users to filter and review old posts by date or in relation to a particular person and to archive or delete posts individually or in bulk.98“Introducing Manage Activity,” Facebook, June 2, 2020, Manage activity is an important and useful new feature, but it does not allow users to search by keywords and remains difficult to find. There are multiple third-party tools that allow users to search through and delete old tweets and posts en masse;99In interviews, PEN America journalists and safety experts mentioned Tweetdelete and Tweetdeleter. Additional third-party tools include Semiphemeral, Twitwipe, and Tweeteraser for Twitter and InstaClean for Instagram. however, some of them cost money and most require granting third-party access to sensitive accounts, which poses its own safety risks depending on the cybersecurity and privacy practices (and ethics) of the developers.

Recommendations: Platforms should provide users with integrated and robust features to manage their personal account histories, including the ability to search through old posts, review them, make them private, delete them, and archive them—individually and in bulk. Specifically:

  • Twitter and Instagram should integrate a feature that allows users to search, review, make private, delete, and archive old content—individually and in bulk.
  • Facebook should expand its new manage activity feature to enable users to search by keywords and should make this feature more visible and easier to access.
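The recommended workflow above amounts to search, then bulk action. As a rough illustration (the data model and function names are invented for this report; no platform exposes posts this way), old content could be filtered by keyword or date and then archived, made private, or deleted as a set:

```python
from datetime import date

# Hypothetical in-memory post store standing in for an account history.
posts = [
    {"id": 1, "date": date(2011, 5, 2), "text": "first day at the paper!", "state": "visible"},
    {"id": 2, "date": date(2012, 8, 9), "text": "my new phone number is ...", "state": "visible"},
    {"id": 3, "date": date(2020, 1, 4), "text": "new investigation out today", "state": "visible"},
]

def search_history(posts, keyword=None, before=None):
    """Filter old posts by keyword and/or date: the 'search' step."""
    results = posts
    if keyword is not None:
        results = [p for p in results if keyword.lower() in p["text"].lower()]
    if before is not None:
        results = [p for p in results if p["date"] < before]
    return results

def bulk_update(posts_to_change, new_state):
    """Apply 'archived', 'private', or 'deleted' to a whole result set at once."""
    for p in posts_to_change:
        p["state"] = new_state

# Review everything posted before 2013, then archive it in one action.
old = search_history(posts, before=date(2013, 1, 1))
bulk_update(old, "archived")
print([p["state"] for p in posts])  # ['archived', 'archived', 'visible']
```

Keyword search (e.g., `search_history(posts, keyword="phone")`) is the piece the report notes is missing from Facebook’s manage activity feature.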

Mitigating risk:
PEN America believes that users should have control over their own social media account histories. Users already have the ability to delete old content on most platforms and via multiple third-party tools. But giving users the ability to purge account histories, especially in bulk, does have drawbacks. Abusers can delete old posts that would otherwise serve as evidence in cases of harassment, stalking, or other online harms. And by removing old content, public officials and entities using social media accounts in their official capacities may undermine accountability and transparency. There are ways to mitigate these drawbacks. It is vital that people facing online abuse are able to capture evidence of harmful content, which is needed for engaging law enforcement, pursuing legal action, and escalating cases with the platforms. For that reason, this report advocates for a documentation feature that would make it easier for targets to quickly and easily preserve evidence of abuse (see “Documentation,” below). In the case of public officials or entities deleting account histories, tools that archive the internet, such as the Wayback Machine, are critically important resources for investigative journalism.100“Politwoops: Explore the Tweets They Didn’t Want You to See,” Propublica,; Valentina De Marval, Bruno Scelza, “Did Bolivia’s Interim President Delete Anti-Indigenous Tweets?,” AFP Fact Check, November 21, 2019, There are also federal laws—most centrally the Freedom of Information Act—and state-level laws that require public officials to retain records that may be disclosed to the public; these records include their statements made on social media.1015 U.S.C § 552; “Digital Media Policy,” Department of the Interior, accessed February 19, 2021, Public officials and entities using social media accounts in their official capacities must adhere to applicable record retention laws, which should apply to social media as they do to other forms of communication.

Rapid response teams and delegated access: Facilitating allyship

The challenge: Online abuse isolates its targets. A 2018 global study from TrollBusters and the IWMF found that 35 percent of women and nonbinary journalists who had experienced threats or harassment reported “feeling distant or cut off from other people.”102Michelle Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” TrollBusters and International Women’s Media Foundation, September 13, 2018, Many people targeted by online abuse suffer in silence because of the stigma, shame, and victim blaming surrounding all forms of harassment.103Angie Kennedy, Kristen Prock, “I Still Feel Like I Am Not Normal,” Trauma Violence & Abuse 19, no. 5, (December 2, 2018), Often targets have no choice but to engage with hateful or harassing content—in order to monitor, mute, report, and document it—which can be overwhelming, exhausting, and traumatizing.104Erin Carson, “This is your brain on hate,” CNET, July 8, 2017,

Many of the writers and journalists in PEN America’s network emphasize the importance of receiving support from others in recovering from episodes of online abuse. Jordan, a blogger who requested to be identified only by their first name, explained: “Other people in my friend circle have to take into account that, being Black and queer, I get more negativity than they would. If they’re white or cisgender or heteronormative, they’ll come back and say, ‘You know what? Jordan is getting a lot of flack, so let’s step up to the plate.’”105“Story of Survival: Jordan,” PEN America Online Harassment Field Manual, October 31, 2017,

Existing features and tools: Users can help one another report abuse on Twitter,106“Report abusive behavior,” Twitter, accessed October 2020, Facebook,107“How to Report Things,” Facebook, accessed October 2020, and Instagram.108“Abuse and Spam,” Instagram, accessed October 2020,,Guidelines%20from%20within%20the%20app But for allies to offer more extensive support—such as blocking or checking direct messages (DMs) on a target’s behalf—they need to have access to the target’s account. In-platform features that securely facilitate allyship are rare, and those that exist were not specifically designed for this purpose. As a result, many targets of online abuse either struggle on their own or resort to ad hoc strategies, such as handing over passwords to allies,109Jillian C. York, “For Bloggers at Risk: Creating a Contingency Plan,” Electronic Frontier Foundation, December 21, 2011, which undermines their cybersecurity at precisely the moment when they are most vulnerable to attacks.

On Facebook, the owner of a public page can grant other users “admin” privileges, but this feature is not available for personal Facebook profiles.110“How do I manage roles for my Facebook page?,” Facebook, accessed October 2020, Similarly, Instagram allows users to share access and designate “roles,” but only on business accounts.111“Manage Roles on a Shared Instagram Account,” Instagram, accessed December 2020, Twitter comes closest to supporting delegated access with its “teams” feature in TweetDeck, which lets users share access to a single account without sharing a password and grants each member owner, admin, or contributor status.112“How to use the Teams feature on Tweetdeck,” Twitter, accessed October 2020,

While useful, these features were designed to facilitate professional productivity and collaboration and meant primarily for institutional accounts or pages.113Sarah Perez, “Twitter enables account sharing in its mobile app, powered by Tweetdeck Teams,” TechCrunch, September 8, 2017, (“This change will make it easier for those who run social media accounts for businesses and brands to post updates, check replies, send direct messages and more, without having to run a separate app.”) The reality is that, like many users, writers and journalists use social media accounts for both personal and professional purposes and need integrated support mechanisms designed specifically to deal with online abuse. Facebook offers a feature that enables users to proactively select a limited number of trusted friends to help them if they get locked out of their account.114“How can I contact the friends I’ve chosen as trusted contacts to get back into my Facebook account?,” Facebook, accessed October 2020, If the company adapted this feature to allow users to proactively select several trusted friends to serve as a rapid response team during episodes of abuse—and added this feature to its new “registration for journalists”115“Register as a Journalist with Facebook,” Facebook Business Help Center, accessed January 20, 2021,—it could serve as an example for other platforms.

In PEN America’s trainings and resources, we advise writers and journalists to proactively designate a rapid response team—a small network of trusted allies—who can be called upon to rally broader support and provide specific assistance, such as account monitoring or temporary housing in the event of doxing or threats.116“Deploying Supportive Cyber Communities,” PEN America Online Harassment Field Manual, accessed February 2021, Several third-party tools and networks are trying to fill the glaring gap in peer support. As Lu Ortiz, founder and executive director of the anti-harassment nonprofit Vita Activa, explains: “Peer support groups are revolutionary because they destigmatize the process of asking for help, provide solidarity, and generate resilience and strategic decision making.”117Lu Ortiz, email to PEN America, January 21, 2021. The anti-harassment nonprofit TrollBusters coordinates informal, organic support networks for journalists.118Michelle Ferrier, interview with PEN America, February 12, 2021. Another nonprofit, Hollaback!, has developed a platform called HeartMob that provides the targets of abuse with support and resources from a community of volunteers.119“About HeartMob,” HeartMob, accessed December 2020, “Our goal,” says co-founder Emily May, “is to reduce trauma for people being harassed online by giving them the immediate support they need.”120Emily May, email to PEN America, January 25, 2021. Block Party, a tool currently in beta for Twitter, gives users the ability to assign “helpers” to assist with monitoring, muting, or blocking abuse.121“Frequently asked questions,” Block Party, accessed October 2020, (“When you add a Helper, you can set their permissions to be able to view only, flag accounts, or even mute and block on your behalf. Mute and block actions apply directly to your Twitter account, but Helpers can’t post tweets from your Twitter account nor can they access or send direct messages.”) Squadbox lets users designate a “squad” of supporters to directly receive and manage abusive content in email in-boxes.122Kaitlin Mahar, Amy X. Zhang, David Karger, “Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation,” CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, no. 586 (April 21, 2018): 1-13, doi/10.1145/3173574.3174160; Haystack Group, accessed October 2020, These tools and communities provide models for how platforms could integrate peer support.

Recommendations: Platforms should add new features and integrate third-party tools that facilitate peer support and allyship. Specifically, platforms should:

  • Enable users to proactively designate a limited number of trusted allies to serve as a rapid response team, which can be called upon to work together or individually to monitor, report, and document publicly visible abuse and to rally a broader online community to help.
  • Offer users the ability to grant specific members of their rapid response team access to their accounts, akin to the “delegate” system available on Gmail.123“Set Up Mail Delegation—Gmail Help,” Google, accessed January 5, 2021, These delegates could assist with tasks that require direct account access, such as blocking, muting, and reporting abuse in DMs. Users should be able to control the level of access their delegates have (to public feeds versus private DMs, for example).
    • Twitter should integrate its “teams” feature, which is currently available only through TweetDeck, more directly into the primary user experience and empower users to specify exactly which anti-harassment features (monitoring, blocking, muting, reporting, etc.) their delegates can access.
    • Instagram should extend its “roles” feature from business accounts to all accounts.
    • Facebook should extend admin privileges from pages to profiles.
  • Periodically nudge users to create rapid response teams and assign delegates, in tandem with security checkups, potentially when a user reaches a certain follower threshold or immediately after the user has reported online abuse.
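The tiered permissions described above—view only, flag, or mute and block on the user’s behalf, as in Block Party’s helper system—can be modeled as a simple access-control check. The sketch below is purely illustrative; the role names, action names, and `can_perform` function are hypothetical, not any platform’s actual API.

```python
from enum import IntEnum

class DelegateRole(IntEnum):
    """Hypothetical tiered permission levels for a rapid response delegate,
    mirroring the view/flag/mute-block tiers Block Party describes."""
    VIEW_ONLY = 1   # can see reported abuse but take no action
    FLAG = 2        # can additionally flag accounts for the user's review
    MUTE_BLOCK = 3  # can additionally mute or block on the user's behalf

# Each action is mapped to the minimum role required to perform it.
# Posting or sending DMs is never delegable, per Block Party's model.
REQUIRED_ROLE = {
    "view": DelegateRole.VIEW_ONLY,
    "flag": DelegateRole.FLAG,
    "mute": DelegateRole.MUTE_BLOCK,
    "block": DelegateRole.MUTE_BLOCK,
    "send_dm": None,  # explicitly non-delegable
}

def can_perform(role: DelegateRole, action: str) -> bool:
    """Return True if a delegate with `role` may perform `action`."""
    required = REQUIRED_ROLE.get(action)
    if required is None:  # unknown or non-delegable action
        return False
    return role >= required
```

The key design choice this illustrates is that delegation should be granular and default-deny: a delegate gets only the powers the targeted user explicitly grants, and account-speaking actions are excluded entirely.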

Shield and dashboard: Treating online abuse like spam

The challenge: Regularly interacting with hate and harassment is harmful, and users need to have greater control over their exposure to it. As Larry Rosen, professor emeritus of psychology at California State University, explained to Consumer Reports: “You’re going to start feeling more negative, maybe depressed, more stressed, more anxious. The advice I’d give is to identify where the negative stuff is coming from and hide it all.”124Thomas Germain, “How to Filter Hate Speech, Hoaxes, and Violent Speech Out of Your Social Feeds,” August 13, 2020,

Lessons can be learned from the world of email, where managing and metering spam have been a qualified success. In her book The Internet of Garbage, technologist and journalist Sarah Jeong makes the connection between spam and the “garbage” of harassment. “Dealing with garbage is time-consuming and emotionally taxing,” she writes. And while “patterning harassment directly after anti-spam is not the answer,” there are “obvious parallels.”125Sarah Jeong, The Internet of Garbage (Vox Media, Inc., 2018), Taking inspiration from efforts to reduce the volume and visibility of spam, platforms can do more to proactively identify online abuse, filter it, and hide it from the feeds, notifications, and DMs of individual users.

Most major social media platforms already rely on a combination of automation and human moderation to proactively identify certain kinds of harmful content in order to reduce its reach, label it, hide it behind screens, or delete it altogether—for all users.126Kat Lo, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, The challenge with online abuse, however, is that it is heavily context-dependent and can often fall into gray areas that both computers and humans have difficulty adjudicating. Shielding individual users from abusive “garbage” and giving them greater control over whether and how they interact with it can provide an alternative to overzealous proactive content moderation, which can severely undermine free expression for all users.

Existing features and tools: The technology to more accurately and effectively identify and filter context-dependent harmful content is currently being built. Twitter,127“How to use advanced muting options,” Twitter, accessed October 2020, Facebook,128“How do I Mute or Unmute a Story on Facebook?,” Facebook, accessed September 2020, and Instagram129Alex Kantrowitz, “Instagram Rolls Out Custom And Default Keyword Filtering To Combat Harassment,” Buzzfeed News, September 12, 2016, already allow users to hide, mute, and filter some content in feeds, messages, and notifications, but these features are fairly limited in functionality, largely reactive, and rarely quarantine content.

Tools such as Perspective,130“Perspective API, which uses machine learning to identify toxic language, is used to give feedback to commenters, help moderators more easily review comments, and keep conversations open online.” Email to PEN America from Jigsaw, February 2, 2021; Perspective, accessed October 2020, Coral,131Coral by Vox Media, accessed December 2020, L1ght,132“FAQ,” L1ght, accessed September, 2020, and Sentropy133John Redgrave & Taylor Rhyne (Founders, Sentropy), interview with PEN America, June 17, 2020; “Our Mission Is to Protect Digital Communities,” About, Sentropy Technologies, Inc., accessed January, 2021, use machine learning to proactively identify and filter harassment at scale. Many newsrooms and other publishers use Coral and Perspective, for example, to automatically identify and quarantine “toxic” content in the comment sections beneath articles, which human content moderators can then evaluate.134Coral by Vox Media, accessed December 2020,; “About the API,” Perspective, accessed August 2020,
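The score-and-quarantine workflow these tools enable—a model assigns each comment a toxicity score, and comments above a tunable threshold are held for human review rather than published—can be sketched in a few lines. In this illustration the word-list scorer is a toy stand-in for a machine-learning model like Perspective; the names and the threshold value are assumptions for demonstration only.

```python
from dataclasses import dataclass, field

# Toy stand-in for an ML toxicity model: a real deployment would call a
# service such as Perspective and receive a score between 0.0 and 1.0.
ABUSIVE_TERMS = {"idiot", "trash"}  # hypothetical word list

def toxicity_score(text: str) -> float:
    """Return a crude 0.0-1.0 toxicity estimate for `text`."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_TERMS)
    return min(1.0, hits / len(words) * 4)

@dataclass
class CommentQueue:
    """Routes high-scoring comments to a human-review quarantine."""
    threshold: float = 0.5          # tunable per publisher or community
    published: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

    def submit(self, text: str) -> str:
        if toxicity_score(text) >= self.threshold:
            self.quarantined.append(text)   # awaits a human moderator
            return "quarantined"
        self.published.append(text)
        return "published"
```

What matters here is not the scoring heuristic but the architecture: automation narrows the stream, and a human moderator makes the final call on borderline content.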

Third-party tools that help individual users (rather than institutions) filter and hide abuse—among them Tune, Block Party, Sentropy Protect, and BodyGuard—have also emerged in recent years. Jigsaw’s Tune is an experimental web browser extension that aims to use machine learning to allow users to adjust the toxicity level of the content they interact with, including content on Facebook and Twitter.135“Tune (Experimental), Chrome Web Store, accessed September, 2020,,in%20comments%20across%20the%20internet Block Party, currently available on Twitter with the goal of expanding to other platforms, aims to proactively identify potentially abusive accounts, automatically block or mute them, and silo related content; users can then choose to review the accounts, report them, and/or unblock and unmute them.136Ingrid Lunden, “Sentropy launches tool for people to protect themselves from social media abuse, starting with Twitter,” February 9, 2021, ; “Stanford grad creates Block Party app to filter out Twitter trolls,” ABC7 KGO, January 29, 2021, Block Party founder Tracy Chou experienced egregious online abuse as a woman of color in tech and created the tool to serve as a “spam folder.”137Shannon Bond, “Block Party Aims To Be A ‘Spam Folder’ For Social Media Harassment,” NPR, February 23, 2021, She explains: “We need to know what people are saying, we need to collect it, we need to keep an eye on it, but we also need to stop seeing it if we want to preserve ourselves.”138Tracy Chou (Block Party), interview with PEN America, August 18, 2020. Both Sentropy Protect and BodyGuard use machine learning to proactively identify and silo abusive content, which users can then review and address; the latter is available for multiple languages on Twitter, Twitch, Instagram, and YouTube.139“Frequently asked questions,” BodyGuard, accessed October 2020,; Matthieu Boutard (BodyGuard), interview with PEN America, June 26, 2020.

Recommendation: Platforms should create a shield that enables users to proactively filter abusive content (across feeds, threads, comments, replies, direct messages, etc.) and quarantine it in a dashboard, where users could then review and address the content as needed, with the help of trusted allies.

How would the shield work?

  • The shield would proactively identify abusive content, filter it out (by hiding it from the targeted user but not from all users), and automatically quarantine it in the dashboard (see below).
  • Users could turn on the shield with just one click from within the platform’s primary user experience.
  • Users could fine-tune the shield to adjust the toxicity level of content they filter out.
  • Users would receive prompts to turn on or fine-tune the shield when the platform detects unusual account activity.

How would the dashboard work? From within the dashboard, users would be able to:

  • Review quarantined content to block, mute, or report it—and related accounts—including in bulk. Content should be blurred by default, with the option of revealing it for review. Ideally, content should also be labeled with the relevant abusive tactic (hate, slur, threat, doxing, etc.) to help users and their allies prioritize what to review.
  • Manually add abusive content to the dashboard that was missed by the shield.
  • Manually release content from the dashboard that was mistakenly filtered as abusive or that the user does not perceive as abusive.
  • Document the abuse (see “Documentation,” below).
  • Activate rapid response teams to provide peer support, including giving trusted allies delegated access to the dashboard to help manage abuse (see “Rapid response teams and delegated access,” above).
  • Access account privacy and security settings (see “Safety modes,” above).
  • Access account history management tools (see “Managing account histories,” above).
  • Access personalized support through an SOS button and emergency hotline (see “Creating an SOS button and emergency hotline,” below).
  • Access external resources such as mental health hotlines, legal counseling, cybersecurity help, and other direct support.
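The dashboard operations recommended above—quarantining content with abuse labels, blurring it by default, revealing items for review, releasing false positives, and bulk-blocking offending accounts—can be sketched as a small data model. This is a minimal illustration of the proposed design, not any platform’s implementation; all class and method names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QuarantinedItem:
    author: str
    text: str
    label: str = "unlabeled"   # e.g. "slur", "threat", "doxing"
    revealed: bool = False     # content is blurred by default

class Dashboard:
    """Illustrative quarantine dashboard for a single targeted user."""

    def __init__(self):
        self.items: list[QuarantinedItem] = []
        self.blocked: set[str] = set()

    def quarantine(self, author, text, label="unlabeled"):
        """Called by the shield, or manually for content the shield missed."""
        self.items.append(QuarantinedItem(author, text, label))

    def reveal(self, index):
        """Unblur a single item so the user (or a delegate) can review it."""
        self.items[index].revealed = True

    def release(self, index):
        """Return a mistakenly filtered item to the user's normal view."""
        return self.items.pop(index)

    def block_all(self, label):
        """Bulk-block every account behind items carrying a given label."""
        for item in self.items:
            if item.label == label:
                self.blocked.add(item.author)
```

Labeling items by abusive tactic is what makes bulk actions and triage by trusted allies practical: a delegate can, for instance, clear all doxing-related items first without reading every piece of quarantined content.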


Mitigating risk: The automated filtering of harmful content is an imperfect science—with false positives, rapidly evolving and coded forms of abuse and hate, and challenges analyzing symbols and images.140Nicolas Kayser-Bril, “Automated moderation tool from Google rates People of Color and gays as ‘toxic,’” Algorithm Watch,; Ilan Price et al, “The Challenge of Identifying Subtle Forms of Toxicity Online,” Medium, December 12, 2018, Platforms should work more closely with one another, with companies that build third-party tools, and with civil society to create and maintain a shared taxonomy of abusive tactics, terms, symbols, etc., and to create publicly available data sets and heuristics for independent review.141Michele Banko, Brendon MacKeen, and Laurie Ray, “A Unified Taxonomy for Harmful Content,” Sentropy Technologies, 2020, (This paper from researchers at Sentropy provides a solid foundation for a shared taxonomy.)

Reactive measures: Facilitating response and reducing harm

Platforms currently offer considerably more options for reacting to online abuse than for proactive protection. Most platforms offer some form of blocking, which is intended to cut off further communication between abuser and target. Nearly all of them also offer some form of muting, which hides abuse from its intended target but not from other users. These mechanisms are vitally important, but they are also inconsistent, insufficient, and inherently limited. Blocking, for example, can be taken as a provocation and escalate abuse, while muting can mask serious threats. No platform analyzed in this report provides a mechanism to track or document abusive content. Most platforms enable users to report abuse, but there is widespread consensus that content moderation systems in general—and reporting features specifically—are often ineffective and cannot adequately keep up with the volume of abuse proliferating on platforms.

Documentation: Recording evidence of abuse

The challenge: All users, including writers and journalists, need to have the ability to maintain a detailed, exportable record of the online abuse they’ve been subjected to. Documentation serves as a prerequisite for engaging with law enforcement or pursuing legal action. It can also help targets track online abuse and facilitate communication with allies and employers. The pressing need for a documentation feature is underscored by the fact that targets can lose evidence if, for instance, an abuser deliberately deletes the content as soon as it has been seen or if the content is reported, determined to be a violation of platform policies, and removed.142Anna Goldfarb, “Expert Advice on How to Deal with Online Harassment,” Vice, March 19, 2018,; “Documenting Online Harassment,” PEN America Online Harassment Field Manual, accessed February 2021,

In 2015, the Electronic Frontier Foundation (EFF) advised digital platforms to build “tools that allow targets of harassment to preserve evidence in a way that law enforcement can understand and use. Abuse reports are currently designed for internet companies’ internal processes, not the legal system.”143Danny O’Brien and Dia Kayyali, “Facing the Challenge of Online Harassment,” Electronic Frontier Foundation, January 8, 2015, According to lawyer Carrie Goldberg, who founded a law firm specializing in defending the targets of online abuse, a documentation feature that provides evidence that could be used in court in the United States should record “screenshots of abusive content, URLs, the social media platform on which abuse occurred, the abuser’s username/handle and basic account info, a time and date stamp, and the target’s username/handle.”144Carrie Goldberg, interview by PEN America, May 26, 2020. Ideally this information would be encrypted where sensible to protect the privacy of all parties, as well as to protect the integrity of the records for use in court.

A documentation feature needs to have a simple interface that is easily navigable by non-expert users who may be traumatized or under duress. “Imagine if you could select [abusive] mentions and just put them in a report,” says technologist and researcher Caroline Sinders. “Journalists could easily capture all instances of harassment and forward them to an editor or trusted colleague for advice.”145Caroline Sinders, interview by PEN America, June 4, 2020. Users need to be able to capture all publicly available data for abusive content with just one click. Better still, a documentation feature could automatically record all key data for any content proactively detected by toxicity filters (see “Shield and dashboard,” above).
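The evidentiary fields attorney Carrie Goldberg enumerates—screenshot, URL, platform, abuser and target handles, and a timestamp—map naturally onto a structured record that can be captured in one click and exported for counsel or employers. The sketch below is illustrative only; the field and function names are assumptions, and a real feature would need encryption and jurisdiction-specific evidentiary standards layered on top.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AbuseRecord:
    """One documented piece of abuse, following the evidentiary fields
    attorney Carrie Goldberg describes for use in U.S. courts."""
    platform: str
    url: str
    abuser_handle: str
    target_handle: str
    screenshot_path: str
    captured_at: str
    note: str = ""   # optional user-added context, e.g. related hashtags

def capture(platform, url, abuser, target, screenshot_path, note=""):
    """One-click capture: stamp the record with the current UTC time."""
    return AbuseRecord(
        platform=platform,
        url=url,
        abuser_handle=abuser,
        target_handle=target,
        screenshot_path=screenshot_path,
        captured_at=datetime.now(timezone.utc).isoformat(),
        note=note,
    )

def export_log(records):
    """Export the full log as JSON for sharing with third parties."""
    return json.dumps([asdict(r) for r in records], indent=2)
```

Because every record carries the same fields in the same order, an exported log can be read by a non-expert ally, an employer, or law enforcement without the target having to retell the story item by item.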

Existing features and tools: Most social media platforms do not offer any tools that facilitate the documentation of online abuse. Twitter comes closest. When users report abusive tweets, they can request that Twitter email them some key information, but users have to proactively report abuse and proactively request this information.146Kaofeng Lee, Ian Harris, “How to Gather Technology Abuse Evidence for Court,” National Council of Juvenile and Family Court Judges, accessed August 2020, (“When you report an abusive tweet to Twitter, you have the option to ask Twitter to email you a report. This report includes: the threatening tweet, the username of the person who tweeted, date and time of the tweet, your account information, and the date and time of your report.”) In other words, on Facebook, Instagram, and Twitter, users must manually take screenshots, save links, track metadata, and create logs of their abuse, all of which can be time-consuming and retraumatizing.

As for existing third-party tools, PEN America was able to identify only two that facilitate documentation and are specifically designed for abuse: JSafe, an app in beta developed by the Reynolds Journalism Institute with the Coalition for Women in Journalism, and DocuSafe, from the National Network to End Domestic Violence. Both of these apps still require users to manually track and enter data, but they offer a single place to store and organize it.147“JSafe: Free mobile application for women journalists to report threats,” Women in Journalism, accessed October 2020,; “DocuSAFE: Documentation and Evidence Collection App,” Technology Safety, Google’s Jigsaw team informed PEN America that they are “experimenting in the space of documentation and reporting to help targets of online harassment manage and take action on the harassment they receive … building on our experience developing Perspective API, which uses machine learning to identify toxic language.”148Email response from Adesola Sanusi, product manager at Jigsaw, February 22, 2021.

Recommendations: Platforms should develop a documentation feature to empower users and their allies to quickly and easily record evidence of abusive content—capturing screenshots, hyperlinks, and other publicly available data automatically or with one click. This feature is especially important for content that is threatening or heightens the risk of physical violence, such as doxing. Specifically, platforms should:

  • Enable users to automatically capture all relevant publicly available data for content that is flagged as abusive via user muting, restricting, blocking, or reporting, as well as proactive content filtering. To ensure that the feature meets evidentiary requirements for legal proceedings, platforms need to work with civil society, law enforcement, and legal experts to create documentation standards in each country and jurisdiction where they operate.
  • Enable users to manually document abusive content with one click and to manually add additional data, including context and relevant hashtags, to supplement automatic documentation.
  • Enable users to download or export documented abuse so it can then be shared with third parties such as nonprofit organizations, employers, support networks, and legal counsel.
  • Ensure that the documentation feature, which will require at least some engagement with abusive content, is user-friendly and designed using trauma-informed, ethical frameworks.

Muting, blocking, and restricting: Improving and standardizing existing features

The challenge: Users targeted by online abuse, including writers and journalists, need to have the ability to limit contact with an abuser and to control or limit exposure to abusive content. While most platforms are building increasingly sophisticated features for these purposes, there is no consistency across platforms in terms of language, functionality, or the mitigation of potential unintended consequences.149Kat Lo, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, (“Different social media platforms use different terms to describe similar or identical moderation features, and conversely, use the same terms to describe moderation features that are implemented differently across platforms.”) Each feature works somewhat differently on each platform, and not every platform offers every feature, making it confusing and time-consuming for users to understand the reactive measures available to them.

Screenshot demonstrating how to restrict an abusive account on Instagram. Photo by Instagram

Existing features and tools: 

Blocking allows users to cut off contact and communication with abusers.150Kat Lo, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, But as Pulsar, an audience intelligence company, explained in a 2018 audit of blocking features: “Blocking remains a highly inconsistent experience on different social platforms.”151Victoria Gray, “The Most Confusing and Necessary Social Media Feature: The State of Blocking in 2018,” Pulsar, June 5, 2018, On Facebook, targets have to block the abuser on messenger and then separately block the abuser’s profile in order to ensure that their public-facing content is no longer visible to the blocked abuser.152“What Is Blocking on Facebook and How Do I Block Someone?,” Facebook Help Center, accessed March 2021,; “What happens when I block messages from someone while I’m using Facebook?,” Facebook Help Center, accessed November 2020, On Twitter, a blocked abuser cannot see any information on the target’s profile (except their profile photo).153Victoria Gray, “The Most Confusing and Necessary Social Media Feature: The State of Blocking in 2018,” Pulsar, June 5, 2018, On Instagram, most of the target’s account (except their name, profile photo, mutual followers, and number of posts) disappears from the blocked abuser’s view.154Mehvish Mushtaq, “What Happens When You Block Someone on Instagram,” Guiding Tech, April 4, 2019, Instagram offers the most robust blocking features, retroactively removing the comments and likes from blocked accounts and enabling users to select up to 25 comments and then block, en masse, all the accounts posting those comments.155Jeff Yeung, “Instagram Brings in New Features to Curb Spam and Offensive Comments,” Hypebeast, May 13, 2020,; “How can I manage multiple comments on my Instagram posts?” Instagram Help Center, accessed December 7, 2020

Muting allows users to make abusive content invisible, but only from their individual perspective (not from all users). The muted material can be a specific piece of content, a specific user, a keyword, or notifications.156Lo, Kat, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, Like blocking, muting works differently on each platform, and the options can get granular. Furthermore, different platforms use different terms to refer to the act of hiding content. On Twitter, users can “mute” entire accounts, individual tweets, and replies to their tweets, and they can “mute” content by keyword, emoji, or hashtag. But they cannot mute DMs, only the notifications announcing them, and there is no expiration date for muting.157“How to Mute Accounts on Twitter,” Twitter Help Center, accessed August 2020,; “How to Use Advanced Muting Options,” Twitter Help Center, accessed August 2020, On Facebook, there’s no equivalent to muting, but users can “snooze” accounts or groups for 30 days, “mute” other users’ stories, and permanently “unfollow” posts without unfriending accounts.158“How do I Mute or Unmute a Story on Facebook?,” Facebook Help Center, accessed September 2020,; “How do I Unfollow a Person, Page or Group on Facebook?,” Facebook Help Center, accessed August 2020, Facebook users can also “block” comments by keywords and “filter for profanity,” but only on pages,159Email from Facebook spokesperson, January 21, 2021; “Moderate Your Facebook Page,” Facebook for Media, accessed January 12, 2021, not on profiles, and the platform’s muting-like features only partly shield targets from abuse in DMs.160“How do I Turn Comment Ranking On or Off for My Facebook Page or Profile?,” Facebook Help Center, accessed February 9, 2021,; Email Response from Facebook spokesperson, January 21, 2021. 
Instagram enables users to “filter” comments by keywords or preset filters,161“How Do I Filter Out Comments I Don’t Want to Appear on My Posts on Instagram?,” Instagram Help Center, accessed September 2020, “mute” posts or stories, and “mute” accounts entirely.162“How Do I Mute or Unmute Someone on Instagram?,” Instagram Help Center, accessed September 2020, In other words, muting, filtering, and snoozing overlap in functionality but remain distinct, which is profoundly confusing for users interacting with these features across services.

Blocking and muting, while critically useful features, can have serious drawbacks for vulnerable users, especially journalists and writers. These features can make it harder for targets to assess the risk they are truly facing because they can no longer see if abuse is ongoing, or if it has escalated to threats of physical or sexual violence, doxing, etc. When abusers can see that they have been blocked, they often create new accounts and ramp up abuse.163Elon Green, “Why Blocking Trolls Doesn’t Work,” Time, August 18, 2016, As journalist Davey Alba explained in an interview with the Committee to Protect Journalists: “I was blocking a lot of people on Twitter for a while. That ended up being weaponized against me because people started making new accounts saying, ‘Oh, of course you blocked me, you don’t want to hear different points of view.’ So, I switched to muting accounts instead.”164Lucy Westcott, “NY Times Reporter Davey Alba on Covering COVID-19 Conspiracy Theories, Facing Online Harassment,” CPJ, May 21, 2020, Blocking and muting can be impractical for journalists and writers, who need access to their audiences, sources, and subjects. Cutting those people off can mean missing important information.

Restricting, a feature that Instagram introduced in 2019, addresses several of the drawbacks of blocking and muting outlined above. By restricting an abusive account, the targeted user places all comments from that account behind a screen, which targets can then choose to review and decide whether to publish, delete, or leave “pending” indefinitely. What distinguishes restricting from blocking is that abusers are not alerted to the fact that their ability to communicate with a target has been restricted. Restricting is also different from muting in that the abuser is the only person who can see the restricted content—targeted users and all other users cannot.165Katy Steinmetz, “What to Know About Restrict, Instagram’s New Anti-Bullying Feature,” Time, July 8, 2019, It is worth noting that even within one company, the term “restrict” has different meanings and parameters. On Facebook, users can place an abusive “friend” on a “restricted list” without unfriending that account entirely—which does not correspond to Instagram’s restriction feature.166“How Do I Add or Remove Someone From my Restricted List on Facebook?,” Facebook Help Center, accessed November 2020,

Recommendations: Platforms need to improve and standardize blocking, muting, and restricting features that help users limit their exposure to abusive content and accounts. Specifically, platforms should:

  • Offer all three mechanisms—blocking, muting, and restricting—and apply each of these mechanisms consistently across all different forms of communication, including comments, DMs, tags, and mentions, and across desktop and mobile apps, regardless of device or browser type.
    • Like Twitter and Instagram, Facebook should offer the option to mute by specific kinds of content, such as keywords and emojis.
    • Twitter, Facebook, and Instagram should allow users to mute (rather than just block) DMs.
    • Like Instagram, Twitter and Facebook should allow users to block, mute, or restrict accounts by flagging comments in bulk.
    • Twitter and Facebook should offer functionalities akin to Instagram’s new restrict feature.
  • Convene a multi-stakeholder coalition of technology companies, civil society organizations, and vulnerable users—or enlist an existing coalition such as the Global Network Initiative,167“Global Network Initiative,” Global Network Initiative, Freedom of Expression and Privacy, July 26, 2020, Online Abuse Coalition,168“Coalition on Online Abuse,” International Women’s Media Foundation, accessed March 2021, or Trust & Safety Professional Association169“Overview,” Trust and Safety Professional Association, accessed March 2021,—to standardize the terms and language used to describe these features based on their core functionalities and license them openly for reuse.


Mitigating risk: Muting is relatively low stakes, but blocking and restricting are riskier, potentially inhibiting transparency and public discourse. While the intended use of blocking or restricting is to limit exposure to an abusive account, these features can also be used to buffer a user, including a public official, from legitimate criticism or from ideas they do not like. A user who has been blocked by another user on Twitter, for example, can discover that they have been blocked because the two parties can no longer communicate. But a user who has been restricted by another user on Instagram has no way of knowing that their restricted content has been hidden—a highly effective way to curb retaliation and the escalation of abuse, but at the cost of transparency. To mitigate these outcomes, restriction should be strictly confined to limiting the visibility of comments posted by a restricted account in response to posts published by the restricting account. Restriction should not affect a restricted account’s ability to comment anywhere else on the platform. Instagram, which is the only platform to currently offer restriction, got this balance right; other platforms that introduce restriction should follow suit.170Kelly Wynne, “What is Instagram’s Restrict Feature and How to Use it,” Newsweek, October 2, 2019, (“If a user chooses to restrict any individual, all of their future comments will be invisible to the public. This only pertains to comments on posts by the person who restricted them.”)

From a free expression standpoint, it is important to recognize that blocking does not stop someone from speaking freely on a social media platform but rather prevents one person from communicating directly with another. PEN America believes that most users of individual social media accounts are entitled to limit direct communication with other users, especially those engaging in abusive conduct. From the standpoint of transparency and accountability, however, the situation becomes more complicated for the professional accounts of public officials and government institutions. Some public officials have used the blocking feature to cut off critics, including their constituents.171“In August, ProPublica filed public-records requests with every governor and 22 federal agencies, asking for lists of everyone blocked on their official Facebook and Twitter accounts. The responses we’ve received so far show that governors and agencies across the country are blocking at least 1,298 accounts… For some, being blocked means losing one of few means to communicate with their elected representatives… Almost every federal agency that responded is blocking accounts.” Leora Smith, Derek Kravitz, “Governors and Federal Agencies Are Blocking Nearly 1,300 Accounts on Facebook and Twitter,” ProPublica, December 8, 2017, In 2019, the Knight First Amendment Institute filed a lawsuit on behalf of Twitter users blocked by then-President Donald Trump. The court ruled that it was unconstitutional, on First Amendment grounds, for the president to block followers because Twitter is a “designated public forum” and @realDonaldTrump was “a presidential account as opposed to a personal account.”172Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 1:17-cv-5205 (S.D.N.Y. 2018)

It should be noted, however, that public officials who identify as women, LGBTQIA+, BIPOC, and/or members of religious or ethnic minorities are disproportionately targeted by online abuse and, in some cases, have been driven from public service.173Lucina Di Meco, “#ShePersisted Women, Politics & Power In The New Media World,” #ShePersisted, Fall 2019,; Abby Ohlheiser, “How Much More Abuse Do Female Politicians Face? A Lot,” MIT Technology Review, October 6, 2020,; “Why Twitter Is a Toxic Place for Women,” Amnesty International, accessed January 20, 2021,
While all public officials are subject to the same obligations regarding transparency and accountability, many of the recommendations outlined in this report would significantly reduce the burden of online abuse for public officials and their staff, without sacrificing their obligations to their constituents.

Reporting: Revamping the user experience

The challenge: The ability of targeted users—and their allies—to report abusive content and accounts is foundational for reducing online harms and a fundamental part of any system of accountability. The goal of reporting is to ensure that abusive content is taken down and that abusive accounts face consequences. But user reporting is just one part of the larger content moderation system, which is often ineffective in ensuring that content that meets the threshold of abuse and violates platform policies is taken down.174“Toxic Twitter-The Reporting Process,” Amnesty International, 2018,

Content moderation is a complex, imperfect process, and the line between harassment and combative but legitimate critique is often murky. Civil society organizations hear regularly from writers, journalists, artists, and activists about the mistaken removal of creative or journalistic content that does not violate platform policies, while violative content flagged as abuse remains. In addition, the removal process is often opaque. There is an urgent need to reform the larger content moderation process to make it more effective, equitable, and accountable, and multiple excellent reports have provided robust recommendations.175Paul Barrett, “Who Moderates the Social Media Giants?,” NYU Stern Center for Business and Human Rights, June 2020,; Robyn Caplan, “Content or Context Moderation,” Data & Society, 2018,; “Freedom and Accountability: A Transatlantic Framework for Moderating Speech Online,” Annenberg Public Policy Center, 2020,; Lindsay Blackwell et al., “Classification and Its Consequences for Online Harassment: Design Insights from Heartmob,” Association for Computing Machinery 1, no. CSCW (December 2017): 24,; Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review, 131, no. 6 (2018): 1598-1670,; Robert Gorwa, Reuben Binns, and Christian Katzenbach, “Algorithmic content moderation: Technical and political challenges in the automation of platform governance,” Big Data & Society 7, no. 1 (January 2020): 1-15,; Joseph Seering et al., “Moderator engagement and community development in the age of algorithms,” New Media and Society, Vol 21, Issue 7 (2019),; Sarah Myers West, “Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms,” New Media & Society 21, no. 7 (July 2019): 1417–43,; Gennie Gebhart, “Who Has Your Back? Censorship Edition 2019,” Electronic Frontier Foundation, June 12, 2019,; “Use of AI in Online Content Moderation,” Cambridge Consultants on behalf of OFCOM, 2019, Rather than tackling the broader subject of content moderation in this report, PEN America focuses on examining reporting features from the standpoint of the user experiencing the abuse and doing the reporting.

In PEN America’s 2017 survey studying the impact of online harassment on writers and journalists, of the 53 percent of respondents who had alerted social media platforms to the harassment they were experiencing, 71 percent found the platform unhelpful.176“Online Harassment Survey: Key Findings​,” PEN America (blog), April 17, 2018, Three years later, many of the experts and journalists PEN America consulted for this report concurred. “I’d say it’s 50/50 when I report, in terms of whether or not Twitter responds with action,” says journalist Emily Burack.177Emily Burack, interview with PEN America, June 15, 2020. Journalist Jami Floyd has found trying to contact the company “a waste of time.”178Jami Floyd, interview with PEN America, June 6, 2020. According to a 2020 study of online hate and harassment conducted by the Anti-Defamation League and YouGov, 77 percent of Americans want companies to make it easier to report hateful content and behavior, up from 67 percent in 2019.179“Online Hate and Harassment: An American Experience,” Anti-Defamation League, June 2020,

Reporting features are often confusing and labor-intensive, placing undue burden on targets of abuse. There is very little consistency across platforms—or even within platforms—about how reporting features work.180Caroline Sinders, Vandinika Shukla, and Elyse Voegeli, “Trust Through Trickery,” Commonplace, January 5, 2021, All platforms have policies governing acceptable conduct and content, but when users are in the process of reporting, they rarely have quick and easy access to relevant policies.181Facebook’s reporting feature is the most useful in that it includes an excerpt of relevant policies, but no link to the policies themselves; Twitter, within its reporting feature, includes a link to its overall policy page, but not to specific relevant pages, nor does it include relevant excerpts. Instagram includes neither links to policies, nor excerpts. Figuring out if a specific piece of content actually violates a particular policy can be confounding and time-consuming. “Explaining policies would help users interpret criticism versus harassment,” says Jason Reich, vice president of corporate security at The New York Times.182Jason Reich, interview with PEN America, June 9, 2020.

These challenges are exacerbated by reporting features that often direct users to select from preset categories of harmful content that rarely align, in language or concept, with specific policies or with the actual experiences of users. For instance, when users report content on Twitter, they must indicate how a specific account is harmful by selecting from preset options that include “targeted harassment,” which is separate from “posting private information,” “directing hate,” “threatening violence,” or “being disrespectful or offensive.” Nowhere in the reporting process—or in its “abusive behavior” policy—does Twitter explain what it means by “targeted harassment” (as opposed to just “harassment” or “abuse”).183“Abusive Behavior,” Twitter, accessed October 2020, This is just one example of many that PEN America found across Twitter, Facebook, and Instagram.

Some platforms funnel all reporting through a single channel. Others treat specific abusive tactics separately—distinguishing impersonation, doxing, and blackmail from harassment and requiring users to provide additional evidence to report such cases. Many users find themselves bombarded by hundreds or thousands of abusive messages in the midst of a coordinated campaign, yet few platforms allow them to report abusive content or accounts in bulk, forcing them to manually report individual content and accounts piecemeal.184Caroline Sinders, Vandinika Shukla, and Elyse Voegeli, “Trust Through Trickery,” Commonplace, January 5, 2021, The Facebook Civil Rights Audit, an independent investigation of the platform’s policies and practices through the lens of civil rights impact published in 2020, advocated for the introduction of bulk reporting of harassment, but Facebook has not acted on this recommendation.185Laura W. Murphy et al., “Facebook’s Civil Rights Audit—Final Report,” (July 8, 2020),; Laura W. Murphy et al., “Facebook’s Civil Rights Audit – Final Report,” (June 30, 2019),

Research for this report revealed a range of user needs that are not currently being met. Some writers and journalists would like to see a quicker, easier, one-click reporting feature. Others emphasize the importance of being able to add context, including the migration of online threats to other communication channels, which is a red flag for heightened risk. Still others want to be able to indicate when they are being attacked in a coordinated way across platforms (especially platforms owned by the same company) or to clarify, for example, why a particular insult could actually be a coded hateful slur.186Laura E. Adkins, interview with PEN America, June 15, 2020; Jane R. Eisner, interview with PEN America, May 27, 2020; Ela Stapley, interview with PEN America, May 15, 2020; Interviews with PEN America, May—August 2020. Technologist Caroline Sinders argues that users should be able to pull up past reports and to draft, reopen, and review reports.187Caroline Sinders, interview with PEN America, June 4, 2020.

Existing features and tools: Twitter, Facebook, and Instagram all offer reporting features. A Facebook or Instagram user whose content has been removed can appeal and then escalate their case to the Facebook Oversight Board, a nominally independent body launched in 2020 that reviews a small subset of content moderation decisions and has the power to reinstate content. But no platform gives users who report abusive content a formal mechanism to appeal or escalate decisions in cases where the abusive content they reported is not removed.

Twitter comes closest, offering designated communication channels for select newsrooms and nonprofits.188Email response from Twitter spokesperson, October 30, 2020. Facebook has what a spokesperson calls “various, purposefully non-public channels available to specific partners.”189Email response from Facebook spokesperson, January 21, 2021. Essentially, a system has emerged in which only individuals or institutions, such as newsrooms and nonprofits, with personal connections at social media platforms are able to escalate individual cases. The ad-hoc, informal, and interpersonal nature of these escalation channels is inherently unpredictable, inequitable, and impossible to scale. “Through the years, we have experienced ups and downs in our relationships with tech companies, and we often still struggle to find the right connections to deal effectively with digital security incidents,” says Daniel Bedoya, director of the global 24-7 Digital Security Helpline for nonprofit Access Now. “We are keen to work with platforms in designing and enabling sustainable and scalable mechanisms to protect some of the most at-risk individuals and organizations in the world.”190Email from Daniel Bedoya (Access Now), January 28, 2021.

Recommendations: Platforms should revamp reporting features to be more user-friendly, responsive, and trauma-informed. Specifically, platforms should:

  • Ensure clarity and consistency between reporting features and policies within platforms. When reporting, users usually select from preset choices to indicate how the content violates the rules. The language used in these preset choices to describe prohibited abusive tactics must be harmonized with the language used in platform policies. When using the reporting features, users should be able to quickly and easily access relevant policies so they can check, in real time, whether the content they are reporting likely violates those policies.
  • Streamline the reporting process. Platforms should streamline the process by creating a single channel for reporting abusive content or accounts (rather than distinct and divergent channels for harassment, doxing, impersonation, etc.) and requiring as few mandatory steps as possible.
  • Create a flexible and responsive report management system. For users seeking more in-depth engagement with the reporting process, often because they are facing complex or coordinated abuse, platforms should:
    • Allow users to create a draft of a report, add to it later, and combine multiple reports.
    • Enable users to see all their past reports and review where they are in the reporting process.
    • Offer an option for users to include additional context—to explain cultural or regional nuances in language, for example, or to flag symbols or images being used to abuse or spread hate.
  • Add bulk reporting. In recognition of the coordinated nature of some harassment campaigns, platforms need to give users the ability to report multiple abusive accounts and content in bulk, to reduce the burden of piecemeal reporting.
  • Create a formal, publicly known appeals or escalation process. When users report abusive content that they perceive to be in violation of platform policies and that content is not taken down, they should have accessible avenues to appeal or escalate their case. The appeal or escalation process should be formal, public, integrated into the reporting process, and available to all users who report harassment. When users file an appeal or escalation, they should be able to amend their case, adding context or additional related abusive content.
  • Provide prompts offering additional support. When used in good faith, reporting is a signal that a user is being harmed on a platform. Users who report abuse should be nudged toward additional in-platform features to help mitigate abuse, as well as toward external resources, ideally filtered by identity, location, and needs (see “Anti-abuse help centers,” below).
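Several of the recommendations above (harmonized categories with policy excerpts at hand, drafts, added context, and bulk reporting) can be illustrated with a minimal data-model sketch. Everything here is hypothetical: the category names, policy excerpts, URLs, and `AbuseReport` schema are invented for illustration and do not describe any platform's actual system.

```python
from dataclasses import dataclass, field

# Hypothetical mapping: each preset reporting category carries the exact
# policy language it enforces plus a link, so a user can check relevance
# in real time while filing a report. Excerpts and URLs are invented.
POLICY_EXCERPTS = {
    "targeted_harassment": (
        "Severe or pervasive targeting of a person with harmful behavior is prohibited.",
        "https://example.com/policies/abusive-behavior",
    ),
    "private_information": (
        "Publishing another person's private information without permission is prohibited.",
        "https://example.com/policies/private-information",
    ),
}

@dataclass
class AbuseReport:
    """A single report supporting drafts, added context, and bulk items."""
    category: str
    items: list = field(default_factory=list)   # multiple posts/accounts at once
    context: str = ""                           # cultural or linguistic nuance
    status: str = "draft"                       # draft -> submitted -> resolved

    def policy(self):
        """Surface the relevant policy excerpt and link during reporting."""
        excerpt, url = POLICY_EXCERPTS[self.category]
        return {"excerpt": excerpt, "url": url}

    def submit(self):
        if not self.items:
            raise ValueError("a report must reference at least one item")
        self.status = "submitted"

# A user drafts a report, adds items in bulk and context, then submits.
report = AbuseReport(category="targeted_harassment")
report.items += ["post:123", "account:abuser42"]
report.context = "this phrase is a coded slur in my region"
report.submit()
```

The point of the sketch is the linkage: because each preset category resolves to the policy text it enforces, the reporting interface and the written rules cannot drift apart.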


Mitigating risk: It is important to recognize that bulk reporting can be weaponized. Reporting has been wielded by abusers to trigger account suspensions, shutdowns, and the removal of posts. Offering the ability to report multiple pieces of content and accounts without requiring users to designate specific, violating content for each one may exacerbate this problem.191“Online Harassment of Reporters: Attack of the Trolls,” Reporters Without Borders, accessed August 2020, Users whose content has been taken down or whose accounts have been suspended due to malicious reporting can already appeal to platforms to have their content or account reinstated, though the process is not always effective and needs significant improvement (see “Appeals,” below). Users can also turn to civil society organizations that have informal escalation channels with the platforms. However, the introduction of bulk reporting necessitates further mitigation. Platforms may need to limit how many accounts or pieces of content can be reported at a time, or to activate the bulk reporting feature only when automated systems identify an onslaught of harassment that bears hallmarks of coordinated inauthentic activity. Ultimately, risk assessment, consultation with civil society, and user testing during the design process would reveal the most effective mitigation strategies.
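One of the mitigations described above, limiting how much can be reported at a time and raising that limit only when automated systems detect an onslaught bearing hallmarks of a coordinated attack, can be sketched as a simple rate limiter. The class, caps, and window below are invented for illustration, not an actual platform mechanism.

```python
import time
from collections import deque

class BulkReportLimiter:
    """Sketch of a bulk-reporting safeguard: a low per-hour cap for
    ordinary use, raised when automated detection has flagged an ongoing
    coordinated attack on the reporting user. All thresholds are invented."""

    NORMAL_CAP = 10        # items reportable per window in ordinary use
    UNDER_ATTACK_CAP = 500 # raised cap during a detected coordinated attack
    WINDOW = 3600          # seconds

    def __init__(self):
        self.events = deque()  # timestamps of previously reported items

    def allow(self, n_items, under_attack=False, now=None):
        """Return True if reporting n_items now stays within the cap."""
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.WINDOW:
            self.events.popleft()
        cap = self.UNDER_ATTACK_CAP if under_attack else self.NORMAL_CAP
        if len(self.events) + n_items > cap:
            return False
        self.events.extend([now] * n_items)
        return True

limiter = BulkReportLimiter()
limiter.allow(8, now=0)                        # within the normal cap
limiter.allow(8, now=1)                        # refused: would exceed 10/hour
limiter.allow(300, under_attack=True, now=2)   # permitted during an attack
```

A design like this keeps bulk reporting available to genuine targets while blunting its use as a weapon, since an abuser cannot mass-report unless detection systems independently corroborate an attack.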

SOS button and emergency hotline: Providing personalized support in real time

The challenge: Some abusive tactics—such as threats of physical or sexual violence or massive coordinated campaigns—can put the targeted user into such an acute state of distress that they have difficulty navigating the attack in the moment. “When you’re experiencing stress, your body goes into an alarm state,” says Elana Newman, professor of psychology at the University of Tulsa and research director at the Dart Center for Journalism and Trauma. “All your key systems slow down so that your emergency system can work, and you either freeze, run away, or prepare to fight back. In an emergency mode, you’re not thinking rationally.”192Elana Newman, interview with PEN America, August 28, 2020.

Platforms face a design challenge: non-traumatized users seeking to report problematic content and traumatized, actively distressed targets contending with severe abuse have quite different needs, and the latter are currently poorly supported. The majority of writers, journalists, and other experts we interviewed emphasized the platforms’ lack of responsiveness and stressed the urgent need for customized support in real time.193Ela Stapley, Pamela Grossman, Jasmine Bager, Talia Lavin, Jason Reich, Carrie Goldberg, Lu Ortiz, Lucy Westcott, Jaclyn Friedman, and many others. Interviews with PEN America, May 2020—February 2021. Users need a way to indicate that they are experiencing extreme online abuse and a mechanism for urgently accessing personalized assistance.

Existing features and tools: While Twitter, Facebook, and Instagram provide users with various features to respond to online abuse, many of them discussed in depth throughout this report, these features are rarely designed to reduce trauma and its triggers. All three platforms provide self-guided support experiences, eschewing costlier approaches that would better aid those experiencing ongoing or extreme abuse. None offer customer support staffed by human beings for users experiencing abuse. Facebook users have to dig through over a hundred forms, find the one that most closely corresponds to their problem, fill it out, and wait. Some of these forms, such as those for impersonation or images that violate privacy rights, correspond to specific abusive tactics, but none explicitly deal with online harassment.194Kristi Hines, “How to Contact Facebook and Get Support When You Need It [Ultimate Guide],” Post Planner, February 4, 2020,; Steven John, “How to Contact Facebook for Problems with Your Account and Other Issues,” Business Insider, June 14, 2019, Twitter also funnels users to forms, though the company does at least group the forms for reporting abuse in one place.195“Contact Us,” Help Center, Twitter Support, accessed January 20, 2021, Instagram theoretically has a customer support email, but its users are unlikely to receive a reply from a human being.196Steven John, “How to Contact Instagram for Help with Your Account, or to Report Other Accounts,” Business Insider France, August 1, 2019,

Several platforms have experimented with panic buttons. In 2010, the Child Exploitation and Online Protection Center (CEOP) in the U.K. lobbied Facebook to offer a panic button to instantly provide guidance for children being bullied or threatened online. Facebook resisted but ultimately bowed to public pressure and created a convoluted workaround: a separate app that, once installed, added a tab with CEOP’s logo to a child’s account profile. If clicked, the tab reportedly directed children to Facebook’s “Safety for teens” page and a panic button to report abuse to law enforcement.197Martin Bryant, “Facebook Gets a ‘Panic Button’. Here’s How it Works,” The Next Web, July 12, 2010,; Caroline McCarthy, “Facebook to Promote New U.K. Safety App,” CNET, July 12, 2010, It remains unclear how widely this feature was used or whether it was effective in supporting children; it no longer seems to be available on Facebook.

Panic buttons that are not directly integrated into a platform’s primary user experience are ineffective. As digital safety adviser Ela Stapley warns: “Most of these panic buttons don’t work because users don’t use the app.”198Ela Stapley, interview with PEN America, May 15, 2020. In January 2020, the dating app Tinder launched a panic button integrated directly in its platform, allowing users concerned about their safety to tap the button to alert a third-party company called Noonlight, which “will reach out to check on the user and alert emergency responders if needed.”199Dalvin Brown, “Tinder Is Adding a Panic Button for When Bad Dates Go Horribly Wrong,” USA Today, January 23, 2020,; Rachel Siegel, “You Swiped Right But It Doesn’t Feel Right: Tinder Now Has a Panic Button,” The Washington Post, January 23, 2020, (“Tinder users who add Noonlight to their profiles can enter information about a meet up, such as whom and where they are meeting. If a user taps the panic button, Noonlight will prompt them to enter a code. If the user doesn’t follow up, a text will come through from Noonlight. If there’s no response, Noonlight will put in a call. And if there’s still no answer, or other confirmation of an emergency, Noonlight will summon the authorities.”) Tinder has not yet published data on the use or efficacy of this promising feature.

Recommendations:

  • Platforms should create an SOS button that users could activate to instantly trigger additional in-platform protections and to access external resources. Users could proactively set up and customize these protections in advance, including:
    • Turning on or turning up toxicity filters (see “Building a shield and dashboard,” above).
    • Tightening security and privacy settings (see “Safety modes,” above).
    • Documenting abuse with one click or automatically (see “Documentation,” above).
    • Activating a rapid response team to provide support (see “Rapid response teams and delegated access,” above).
    • Accessing external resources such as emergency mental health counseling, legal counseling, and cybersecurity help (see “Anti-abuse help centers,” below).
  • Platforms should create an emergency hotline (phone and/or chat) that provides users facing extreme online abuse (such as cyberstalking or a coordinated mob attack) with personalized, trauma-informed support in real time.
  • To ensure user-friendliness and accessibility, platforms should fully integrate an SOS button and emergency hotline directly into the primary user experience. Better still, platforms could use automated detection to nudge users about these features, as Facebook does with its suicide prevention efforts.200“Suicide Prevention,” Facebook Safety Center,

Anti-abuse help centers: Making resources and tools accessible

The challenge: Many of the writers and journalists in PEN America’s network are unaware of the in-platform features and third-party tools that already exist to mitigate online abuse. In a 2017 survey of writers and journalists who had experienced online abuse, PEN America found that nearly half of respondents reported changing social media settings only after they were harassed.201“Online Harassment Survey: Key Findings,” PEN America, accessed September 2020, As reporter Katy Steinmetz wrote in an article in Time highlighting Instagram’s new anti-harassment features: “The company has launched other tools meant to help users protect themselves from bullies in the past—like a filter that will hide comments containing certain keywords that a user designates—but many remain unaware that such tools exist.”202Katy Steinmetz, “What to know about Restrict, Instagram’s New Anti-Bullying Feature,” Time, July 8th 2019,

Third-party tools that have emerged to counter online abuse (analyzed throughout this report) are urgently needed and very welcome, but they, too, face an array of challenges. Most are still in early stages of development, and few are sufficiently well financed to be as effective as they could be. As digital safety adviser Ela Stapley explains: “Journalists become disheartened when the tool is buggy or the user experience is bad. They also get frustrated when a tool they found useful suddenly stops working because it has run out of funding. They just want a tool that works well and is easy to set up.”203Ela Stapley, email to PEN America, February 2, 2021.

Some of these tools involve costs for the consumer, an insurmountable obstacle for many staff and freelance journalists in an industry under intense financial pressure. Any anti-abuse tool that must be linked to sensitive accounts, such as social media or email, should be audited by independent cybersecurity experts, but this cost can be prohibitive for the developers. Perhaps the biggest challenge, however, is that social media platforms fail to highlight, integrate, or support the development of these third-party tools.204Facebook’s data lockdown is a disaster for academic researchers, The Conversation, April 2018,

Existing features and tools: Twitter, Facebook, and Instagram all have help centers, with some content specifically focused on online abuse. Both Twitter’s multipage “Abuse”205“Safety and Security,” Twitter, accessed January 13, 2021, and Instagram’s “Learn how to address abuse”206“Learn How to Address Abuse,” Instagram Help Center, accessed January 13, 2021,[0]=Instagram%20Help&bc[1]=Privacy%20and%20Safety%20Center pages are relatively easy to find, but the guidance they offer is bare-bones and general. On Twitter’s page headed “How to help someone with online abuse,”207“How to Help Someone with Online Abuse,” Twitter Help Center, accessed January 22, 2021,violations%20of%20the%20Twitter%20Rules for example, the platform does not explain that users can use TweetDeck to give allies delegated access to their accounts to help manage abuse. Instagram does not explain the strategic differences between blocking, muting, filtering, and restricting. Facebook’s guidance and resources for navigating abuse are more robust yet much harder to find and more confusing to parse. Its “Abuse resources”208“Abuse Resources,” Facebook Help Center, accessed January 13, 2021, page, within the help center, is just a brief Q&A that could significantly benefit from more direct, specific, and nuanced guidance. Its section on non-consensual intimate imagery209“Not Without My Consent,” Facebook Safety Center, accessed January 13, 2021, is more robust but hard to find, and it does not address the myriad other abusive tactics that adult users face. Most of these anti-harassment resources exist outside the primary user experience, and users are not alerted to their existence with proactive nudges. Many of them seem to be primarily aimed at children, parents, and educators rather than adults.210“Put a Stop to Bullying,” Facebook, accessed December 8, 2020,

While platform help centers do not specifically address the needs of journalists, a number of the third-party tools analyzed throughout this report do. Some have been built by research teams within big technology companies.211“A safer internet means a safer world,” Jigsaw, accessed December 8, 2020, A handful were developed by the private sector and seem to have viable revenue models.212“About Us,” DeleteMe, accessed December 8, 2020, “Delete many tweets with one click!,” TweetDeleter, accessed December 8, 2020, But the majority of these tools and services have been built with limited resources by universities, nonprofits, and/or technologists launching startups.213About HeartMob,” HeartMob, accessed December 8, 2020,; “JSafe: Free mobile application for women journalists to report threats,” The Coalition For Women in Journalism, accessed December 8, 2020,; Katilin Mahar, Amy X. Zhang, David Karger, “Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation,” CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, no. 586 (April 21, 2018): 1-13, doi/10.1145/3173574.3174160; Haystack Group, accessed October 2020,; “Founding story,” Block Party, accessed December 8, 2020,; “Product,” Tall Poppy, accessed March 1, 2021,

Recommendations:
  • Platforms should build robust, user-friendly, and easily accessible sections within help centers that expressly address online abuse. Specifically, Twitter, Facebook, and Instagram should:
    • Outline and adequately explain all the features they already offer to address online abuse.
    • Develop content specifically tailored to the needs of writers and journalists, taking into account that they rely on social media to do their work.
    • Provide a visually prominent link directly to this information in the main user experience. Like fire alarms, these links can be unobtrusive but should be right at hand in case they are needed.
    • Use nudges, sign-on prompts, interactive tip sheets, graphics, videos, and quizzes to regularly educate users about in-platform anti-abuse features.
    • Make the information available in multiple languages, as well as in large print and audio, to ensure wider accessibility.
    • Invest in training vulnerable users with specific needs, including journalists and writers, on how to use anti-harassment features.
    • Direct users to external resources, including online abuse self-defense guides,214“Online Harassment Field Manual,” PEN America, accessed September 30, 2020,; “OnlineSOS Action Center,” OnlineSOS, accessed September 30, 2020. cybersecurity help lines and tools,215“Home,” Tall Poppy, accessed September 2020,; “Digital Security Helpline.” Access Now (blog), accessed September 2020, mental health services,216“Lifeline,” National Suicide Prevention Lifeline, accessed January 26, 2021,; “LGBT National Youth Talkline,” LGBT National Help Center, accessed January 26, 2021,; “About the National Sexual Assault Telephone Hotline,” RAINN, accessed January 26, 2021,; “Helpline” Vita Activa, accessed January 22, 2021, and peer support.217“Self Care for People Experiencing Harassment,” HeartMob, accessed October 2020,; “Resources for Journalists,” TrollBusters, accessed October 2020, Because direct referral from a global platform will exponentially increase the volume of requests for help, platforms should consult with and support the organizations responsible for staffing and maintaining those resources, as Reddit has done in its partnership with the Crisis Text Line for users at risk of suicide and self-harm.218Sarah Perez, “Reddit Partners and Integrates with Mental Health Service Crisis Text Line,” TechCrunch, March 5, 2020,
  • Platforms should support the creation of promising new third-party tools designed to counter online abuse—especially those built by and for women, BIPOC, and/or LGBTQIA+ technologists with firsthand experience of online abuse—by investing in R&D and providing access to application programming interfaces (APIs), data, and other relevant information.

Disarming Abusive Users

The burden of dealing with online abuse often rests squarely on the shoulders of its targets, and those targets are often women, BIPOC, LGBTQIA+, and/or members of religious and ethnic minorities. Its impact, from the strain on mental health to the chilling effects on speech and career prospects, stands in stark contrast to the point-and-click ease with which abusers inflict it. “I want harassment to be as annoying for my harassers as it is for me to report it,”219Talia Lavin, interview with PEN America, May 25, 2020. says journalist Talia Lavin.

Online abuse cannot be addressed solely by creating tools and features that empower the targets of abuse and their allies. Platforms must also actively hold abusive users to account. “By allowing harassers to function with immunity,” they “exacerbate harm,” Soraya Chemaly, executive director of the Representation Project, explained in a 2015 interview with Mic.220Julie Zeilinger, “One of the Biggest Reasons Harassment Persists on Social Media Is One We Never Talk About,” Mic, March 26, 2015,

As noted in the introduction and throughout this report, efforts to deter abuse must be balanced against competing priorities: to protect critical speech and prevent the silencing of legitimate dissenting viewpoints, which may include heated debate that does not rise to the level of abuse as well as humor, satire, and artistic expression that can be mistaken for abusive content. To that end, PEN America’s recommendations seek to deter abuse without unduly increasing the platforms’ power to police critical speech, which threatens all users’ free expression rights.

A foundational principle of our analysis is that, although the lines can sometimes seem clear, any person can be a target, a bystander, or an abuser, depending on their behavior.221Justin Cheng et al., “Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions,” (2017), Furthermore, it can be useful to distinguish between casual abuse and committed or automated abuse. Casual abuse may include, for example, individuals who are engaged in unfocused nastiness for sport or who get swept up in online anger or vitriol against a target. Committed or automated abuse includes individuals, bots, groups, and state actors engaging in coordinated and premeditated abuse against particular targets, often with particular outcomes in mind. Both are problematic and harmful, but casual abuse warrants a different set of interventions than the strategies being used to battle coordinated inauthentic activity, such as mass content takedowns, rapid account deletions, and forensic investigations.222Craig Timberg, Elizabeth Dwoskin, “Twitter is sweeping out fake accounts like never before, putting user growth at risk,” The Washington Post, July 6, 2018,; Mae Anderson, “Twitter and Facebook delete foreign state-backed accounts,” PBS, December 20, 2019, In this section, our recommendations primarily focus on deterring casual abuse.

Nudges: Using design to discourage abuse

The challenge: Social media platforms are built to encourage immediacy, emotional impact, and virality because those characteristics heighten the user engagement that is fundamental to their business models.223“If exposing users to others’ emotions keeps them engaged, and if engagement is a key outcome for digital media, [as a business strategy] digital media companies should try to upregulate users’ emotions by increasing the frequency and intensity of expressed emotions…This is likely to magnify emotion contagion online.” Amit Goldenberg, James J. Gross, “Digital Emotion Contagion,” Trends in Cognitive Sciences 24, no. 4 (April 2020): 316–328, The result of such incentives, according to Dr. Kent Bausman, a sociology professor at Maryville University, is that social media “has made trolling behavior more pervasive and virulent.”224Peter Suciu, “Trolls Continue to be a Problem on Social Media,” Forbes, June 4, 2020, Researchers at the University of Michigan recently highlighted their “concerns about the limitations of existing approaches social media sites use” to curb abuse—namely, restrictive tactics like “removing content and banning users.” They advocate for educating users rather than just penalizing them.225Laurel Thomas, “Publicly shaming harassers may be popular, but it doesn’t bring justice,” Michigan News, April 2020,

One way to do this is through nudges—interventions that encourage, rather than force, changes in behavior by presenting opportunities for feedback and education.226“Molly Crockett at Yale’s Crockett Lab has suggested that our inability to physically see the emotional reactions of others might encourage negative behavior on social media.” Tobias Rose-Stockwell, “Facebook’s problems can be solved with design,” Quartz, 2018,; Kathleen Van Royen et al., “‘Thinking Before Posting?’ Reducing cyber harassment on social networking sites through a reflective message,” Computers in Human Behavior 66 (January 2017): 345-352,; Yang Wang et al., “Privacy Nudges for Social Media: An Exploratory Facebook Study,” WWW ‘13 Companion: Proceedings of the 22nd International Conference on World Wide Web (May 2013): 763-770, For example, a user in the process of drafting a post with abusive language could receive a nudge encouraging them to pause and reconsider. Along these lines, Karen Kornbluth, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund, has called for platforms to counter virality by introducing friction—design elements that nudge users by making certain behaviors less convenient or slower—to “make it harder to spread hate and easier to engage constructively online.”227Karen Kornbluth, Ellen P. Goodman, “Safeguarding Digital Democracy: Digital Innovation and Democracy Initiative Roadmap,” March 2020,; Email response from Karen Kornbluth, August 25, 2020. (According to an email to PEN America, Karen Kornbluth has shifted from the term “light patterns” to the term “empowerment patterns” since the aforementioned article was published.) The use of nudges has the major advantage of preserving freedom of expression, giving users an opportunity to make informed and deliberate decisions about how they choose to act.

Screenshots of Twitter and Instagram’s experiments with nudges, which pop up to prompt users to reconsider the language of a potentially abusive post. Photos are screenshots from Twitter and Instagram.

That said, nudges are no panacea. The jury is still out on how effective specific types of nudges actually are,228Cass R. Sunstein, “Nudges That Fail” (July 18, 2016),; Samuel Hardman Taylor et al., “Accountability and Empathy by Design: Encouraging Bystander Intervention to Cyberbullying on Social Media,” Proceedings of the ACM on Human-Computer Interaction 3 (November 2019),; Taylor Hatmaker, “Twitter plans to bring prompts to ‘read before you retweet’ to all users,” TechCrunch, September 24, 2020,; Susan Benkelman, Harrison Mantas, “Can an accuracy ‘nudge’ help prevent people from sharing misinformation?,” Poynter Institute, July 16, 2020,; Alessandro Acquisti et al., “Nudges for Privacy and Security: Understanding and Assisting Users’ Choices Online,” ACM Computing Surveys 50, no. 3, (October 2017), particularly in the absence of meaningful experimental validation. Furthermore, nudges that depend on automation can be prone to false positives because algorithms are vulnerable to the biases of their authors and the data sets on which they are trained.229“The challenge of identifying subtle forms of toxicity online,” Medium, December 12, 2018, In the example offered above, for instance, a nudge could mistakenly flag a post as potentially abusive because it depends on an automated system that has trouble distinguishing between a racial slur and a reclaimed term. 

Existing features and tools: In recent years, platforms have piloted the use of nudges to discourage harmful content, especially disinformation. More should be done to apply this approach to reducing harassment and to evaluate its efficacy. Twitter230Nick Statt, “Twitter tests a warning message that tells users to rethink offensive replies,” The Verge, May 5, 2020, and Instagram231From Instagram: “When someone writes a caption for an Instagram feed post or a comment and our AI detects the caption/comment as potentially offensive, they will receive a prompt informing them that their caption is similar to those reported for bullying. They will have the opportunity to edit their caption/comment before it is posted.” Email response from Instagram spokesperson, January 15, 2021; Eric Ravenscraft, “Instagram’s New Anti-Bullying Nudges Could Actually Work,” OneZero, May 9, 2019,; are currently experimenting with automation that proactively identifies harmful language and nudges users to rethink a reply before sending it. In an email to PEN America, Facebook states that it is also piloting nudges for harassing content, though we were unable to verify this statement.232From Facebook: “In certain scenarios, Facebook will prompt users to re-review their content prior to posting because it looks similar to previous violating posts. This is generally limited to hate speech and harassment and allows the user to edit the post, post as-is, or remove it altogether;” Email response from Facebook spokesperson, January 21, 2021. NOTE: PEN America was unable to independently verify this claim. While Facebook and Instagram claim that such nudges can help reduce abuse,233Email response from Instagram spokesperson, January 15, 2021; Email response from Facebook spokesperson, January 21, 2021. 
none of the platforms examined in this report have shared data on the efficacy of these interventions.234“OneZero asked Instagram if its existing filters have had any measurable impact on bullying, but the company declined to share specific numbers.” Eric Ravenscraft, “Instagram’s New Anti-Bullying Nudges Could Actually Work,” OneZero, May 9, 2019,


Recommendations:
  • Platforms should use nudges to discourage users’ attempts to engage in abusive behavior. One way to do this is to use automation to proactively identify content as potentially abusive and nudge users with a warning that their content may violate platform policies and encourage them to revise it before they post.
  • Platforms should study the efficacy of nudges to curb abuse and publish these findings. Platforms should also communicate clearly and transparently about how the algorithms that inform many of these interventions are trained, including efforts to curb implicit bias.
  • Platforms should give outside researchers access to data—both on the efficacy of nudges and on the data on which algorithms are trained to detect harmful language—so they can independently assess success and flag unintended harm, as well as recommend improvements.
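The mechanics of the automated pre-posting nudge recommended above can be sketched in a few lines of code. This is a deliberately crude illustration, not any platform’s actual system: real platforms use trained machine-learning classifiers whose thresholds and behavior are not public, and the word list, threshold, and function names below are invented for the sake of the example. What matters is the design choice the sketch makes visible: the user is warned, not blocked, and retains the final decision.

```python
# Toy sketch of a pre-submission "nudge." All names and values here are
# hypothetical; real systems rely on trained toxicity models, not word lists.

FLAGGED_TERMS = {"idiot", "trash", "worthless"}  # stand-in word list

def score_toxicity(text: str) -> float:
    """Crude stand-in for an ML model: fraction of flagged words."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

def compose_post(text: str, confirm_anyway: bool = False) -> str:
    """Nudge rather than block: warn, but let the user revise or post anyway."""
    if score_toxicity(text) >= 0.25 and not confirm_anyway:
        return "NUDGE: This post may violate our rules. Edit it, or confirm to post as-is."
    return "POSTED"
```

Because the check runs before submission and the user can still confirm, the intervention preserves expression while introducing the kind of friction described above; it also makes concrete why false positives matter, since a naive classifier like this one cannot tell a slur from a reclaimed term.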

Rules in real time: Educating users and making consequences visible

The challenge: Norms governing behavior can only work if they have been clearly communicated, understood, and agreed upon by the members of an online community. However, most social media platforms keep their policies governing user behavior in an area distinct from the primary user experience of their product. In an interview with PEN America, Jillian York, director for international freedom of expression at the Electronic Frontier Foundation (EFF), says that Twitter’s rules, for example, “are really difficult to find, so most users aren’t even aware of what they are or how to find them before they violate them.” Moreover, a user might break a rule, “and all they’re told is that they broke the rules, not which rule they broke and why or how.”235Jillian York, interview with PEN America, May 21, 2020.

While it is important for platforms to maintain dedicated areas that display all their policies, it is equally critical to fully integrate the most important and relevant guidelines directly within the primary user experience so users can see this information in real time. When users create a new password, for instance, they should not have to go to a separate page to learn about minimum password complexity requirements; that information is included in the same form or window. Similarly, users should be able to quickly check that content complies with key rules before posting, without having to click away and search through a separate website. Integrating platform rules could not only reduce casual abuse and increase transparency but also help avert the perception of arbitrary or biased enforcement.

Recent research from Stanford University indicates that making community rules more visible, including at the top of comment and discussion sections, increases newcomers’ compliance with them while simultaneously increasing participation.236The Stanford team conducted an experiment randomizing announcements of community rules in large-scale online conversations for a scientific website with 13 million subscribers. Compared with discussions with no mention of community expectations, displaying the rules increased newcomer rule compliance by more than 8 percent and increased the participation rate of newcomers in discussions by 70 percent on average. J. Nathan Matias, “Preventing harassment and increasing group participation through social norms in 2,190 online science discussions,” PNAS 116, no. 20 (2019): 9785-9789, An internal audit at YouTube found that users actively wanted the platform to create clearer policies to make enforcement more consistent, and to be more transparent about enforcement actions (see “Escalating penalties,” below).237“Making our strikes system clear and consistent,” YouTube, accessed December 2, 2020,

Existing features: If Twitter users want to review the guidelines governing acceptable behavior, they have to go to the overall menu on the platform’s app or desktop version and intuit that they will find these guidelines within the “help center,” which then takes them to a webpage outside the primary user experience. From there, users have to head to a page called “Rules and policies” and review over a dozen distinct pages that are, for reasons that remain unclear, spread out between two sections titled “Twitter’s Rules and policies” and “General guidelines and policies.”238“Help Center,” Twitter, accessed January 2021, On Facebook’s app and desktop version, users looking for guidelines have to go to the main menu and embark on a long and winding journey with stops at “help and support,” “help center,” “policies and reporting,” and “about our policies”—none of which actually lead to the platform’s “community standards,” which live on a separate website.239“About our Policies,” Facebook, accessed January 2021,; “Community Standards,” Facebook, accessed October 2020, On Instagram, from within either the app or desktop version, the authors of this report were unable to locate the platform’s “community guidelines” (which live on a separate website240“Community Guidelines,” Instagram, accessed January 2021,[0]=Instagram%20Help&bc[1]=Privacy%20and%20Safety%20Center). Furthermore, as the Facebook Oversight Board asserted, given that Facebook and Instagram belong to the same company, the relationship between Facebook’s extensive “community standards” and Instagram’s shorter “community guidelines” needs to be clarified and their inconsistencies need to be ironed out.241evelyn douek, “The Facebook Oversight Board’s First Decisions: Ambitious, and Perhaps Impractical,” Lawfare, January 28, 2021,


Recommendations:
  • Platforms should make their rules—and the consequences for breaking them—easily and directly accessible to users in real time and within the primary user experience.
  • Platforms should use the full suite of design elements—including nudges, labels, and contextual clues—to spotlight relevant rules.
  • Platforms should routinely use policy checkups or reminders, akin to existing interactive privacy checkups on Facebook and Google. Whenever platforms make major changes or updates to their rules governing acceptable behavior, they should proactively call attention to these changes and seek affirmative consent from users.
Graphic of crossing guards holding cell phones. Graphic via Wikimedia Commons

Escalating penalties: Building an accountability system for abusive users

The challenge: To effectively tackle online abuse, especially by committed abusers, decisive measures like account suspensions and bans are sometimes necessary. They are also fraught. As discussed throughout this report, overzealous suspensions or bans can compromise free expression. Content moderation is inherently imperfect, especially when it relies on automation, and it has repeatedly been weaponized to silence writers, journalists, and activists.242Sam Biddle, “Facebook Lets Vietnam’s Cyberarmy Target Dissidents, Rejecting a Celebrity’s Plea,” The Intercept, December 12, 2020,  For major and minor infractions alike, however, the consequences for violating rules are not readily visible or clearly communicated to users.

“Platforms should add suggestions of consequences of misconduct in the standards,” says law professor Mary Anne Franks. “They need to have clear rules that say this kind of behavior will result, for instance, in a temporary suspension of an account. They need to say what the enforcement will be. Otherwise, compliance with their standards is a recommendation, not an obligation.”243Mary Anne Franks, interview with PEN America, May 22, 2020. Experiments in the gaming industry underscore the efficacy of explaining to abusers exactly which policies their content has violated and why it led to a penalty.244Laura Hudson, “Curbing Online Abuse Isn’t Impossible. Here’s Where We Start,” Wired, May 15, 2014

This is exactly what the Facebook Oversight Board advised in its first set of recommendations, which called for “more transparency and due process for users, to help them understand the platform’s rules,” according to evelyn douek, a lecturer and doctoral candidate at Harvard Law School. “The FOB shows concern that users who have been found to violate their rules simply cannot know what they are doing wrong, whether because Facebook’s policies are not clear or lack detail or are scattered around different websites, or because users are not given an adequate explanation for which rule has been applied in their specific case.”245evelyn douek, “The Facebook Oversight Board’s First Decisions: Ambitious, and Perhaps Impractical,” Lawfare, January 28, 2021,; see also “Online harassment and abuse against women journalists and major social media platforms,” ARTICLE 19 (2020): 16-17,

Existing features and tools: Some platforms are developing escalating penalties, but these generally remain nascent and poorly publicized to users. They do not actually constitute a transparent and clearly communicated accountability system. Twitter may suspend accounts for violating its rules, including for engaging in abusive behavior, and may require users to verify account ownership or formally appeal to lift a suspension. The company can also place an account in read-only mode, limiting the ability to tweet, retweet, or like content, and may require a user to delete the violating content to restore full functionality. Subsequent violations, it warns, “may result in permanent suspension,” which is a perplexing oxymoron.246“About Suspended Accounts,” Twitter, accessed October 2020,; “Our range of enforcement options,” Twitter, accessed November 2020,; “Hateful Conduct Policy,” Twitter Help Center, accessed January 26, 2021,

In 2019, Instagram began issuing alerts to users whose posts repeatedly violate community guidelines, informing them that their account may be banned if they persist and providing them with a history of the relevant posts and the reasons for their removal.247Jacob Kastrenakes, “Instagram will now warn users close to having their account banned,” The Verge, According to Facebook, the platform issues warnings to users who post content or misuse its features in a way that violates community standards.248“Warnings,” Facebook, accessed October 2020, In an email to PEN America, Facebook elaborated: “Time-bound feature limits are the central penalty used on Facebook users. If Facebook removes multiple pieces of content from a user’s profile, Page, or group within a short period of time, they’ll have a short-term restriction placed on their account. This user will continue to receive additional and longer restrictions as long as they keep on violating our Community Standards. Other less frequent penalties include rate limits, education requirements, audience limitation, loss of certain product features (such as the ability to go Live).”249Email response from Facebook spokesperson, January 21, 2021.

Of the platforms analyzed in this report, only Twitter publicly outlines its penalties in its help center. But to better understand Twitter’s accountability system, a user would have to read several distinct policy pages, all of which make heavy use of the word “may.” PEN America could find only the most minimal information on Facebook’s or Instagram’s help centers that lays out each platform’s penalties for violating their policies; we cobbled together the information above from a news article,250Jacob Kastrenakes, “Instagram will now warn users close to having their account banned,” The Verge, an email exchange with both platforms,251Email response from Instagram spokesperson, January 15, 2021; Email response from Facebook spokesperson, January 21, 2021. and a corporate blog post from 2018.252“Enforcing Our Community Standards,” Facebook, accessed February 2021, Facebook and Instagram informed PEN America that they do not “provide full visibility into the penalties to avoid gaming.”253Email response from Instagram spokesperson, January 15, 2021; Email response from Facebook spokesperson, January 21, 2021. Bottom line: It is exceptionally difficult for users to understand the consequences of violating platform policies, even if they can ascertain what those policies are in the first place.

Beyond the platforms analyzed in this report, YouTube offers a model worth exploring. The platform overhauled its penalties into a system of escalating “strikes” after consulting with its users in 2019.254The YouTube Team, “Making Our Strikes System Clear and Consistent,” Youtube Official Blog, February 19, 2019, When a user violates community guidelines for the first time, they receive a warning that explains what content was removed and which policies were violated, and explains next steps. If a user posts prohibited content a second time, they receive a “strike,” which restricts account functionality for one week. After the second strike, users experience further functionality restrictions for two weeks, and after three strikes within a 90-day period, any channel continuing to post violating content will, according to YouTube, be deleted. Appeals, the platform states, are available at every step of the process.255“Community Guidelines strike basics,” YouTube, accessed October 2020, The platform claims that “94% of those who do receive a first strike never get a second one,” though it has not released the raw data to back up this assertion.256“Making our strikes systems clear and consistent,” YouTube, accessed December 2, 2020, While YouTube’s administration of this system remains obscure in places—for example, in its use of automated flagging—it provides a useful template to build on.
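The escalation logic YouTube describes can be modeled in a short sketch. The warning step, the one- and two-week restriction durations, and the three-strikes-in-90-days removal rule come directly from the description above; everything else (class and method names, the data layout) is invented for illustration, and YouTube’s real system is certainly more complex.

```python
# Minimal model of an escalating "strikes" accountability system:
# first violation -> warning; strike 1 -> 1-week restriction;
# strike 2 -> 2-week restriction; 3 strikes within 90 days -> removal.
from datetime import date, timedelta

STRIKE_WINDOW = timedelta(days=90)
RESTRICTIONS = {1: timedelta(weeks=1), 2: timedelta(weeks=2)}

class Account:
    def __init__(self):
        self.warned = False
        self.strikes = []  # dates of strikes still within the window

    def record_violation(self, today: date) -> str:
        # Strikes older than the 90-day window expire before counting.
        self.strikes = [d for d in self.strikes if today - d < STRIKE_WINDOW]
        if not self.warned:
            self.warned = True
            return "warning: content removed; policy violated and next steps explained"
        self.strikes.append(today)
        n = len(self.strikes)
        if n >= 3:
            return "channel removed (3 strikes within 90 days)"
        return f"strike {n}: restricted for {RESTRICTIONS[n].days} days"
```

Two design properties of such a system are worth noting: because strikes expire, an account that reforms is not penalized indefinitely, and because every return value names the violated policy and the consequence, the penalty ladder stays legible to the user, which is precisely what the recommendations below call for.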


Recommendations:
  • Platforms should create a transparent system of escalating penalties for all users, including warnings, strikes, temporary functionality restrictions, and suspensions, as well as content takedowns and account bans. This accountability system should be fully integrated into the primary user experience and clearly visible alongside policies governing user behavior.
  • Platforms should use the full suite of design elements (nudges, labels, contextual clues, etc.) to communicate clearly and consistently with users across all available channels (within platform, via email, etc.) about what rule has been violated, current and potential future penalties, and next steps, including how to appeal.
  • Platforms should convene a coalition of technology companies, civil society organizations, and vulnerable users—potentially leveraging the newly formed Digital Trust & Safety Partnership257Margaret Harding McGill, “Tech giants list principles for handling harmful content,” Axios, February 13, 2021—to create a baseline set of escalating penalties that can help establish common expectations, which could include:
    • Warnings and strikes. Platforms should adopt a graduated approach to enforcement, issuing warnings and counting strikes before taking the more drastic step of suspending or closing accounts. That said, some violations, like direct incitement to violence, warrant immediate suspension.
    • Temporary suspensions and functionality limitations. Platforms should deploy temporary suspensions and functionality limitations—such as preventing accounts from posting but not from browsing, and suspending accounts for several days or weeks.
    • Coordinated protocols for adjusting these escalating penalties over time, including in response to evolving patterns of abuse or the weaponization of the penalties themselves.

Appeals: Ensuring a transparent and expeditious process

The challenge: Not all content reported as abuse is actually abusive. Reporting systems are regularly weaponized by abusers seeking to intimidate or defame their targets and trigger the removal of posts and suspension of accounts.258Sam Biddle, “Facebook Lets Vietnam’s Cyberarmy Target Dissidents, Rejecting a Celebrity’s Plea,” The Intercept, December 12, 2020,; Katie Notopoulos, “How Trolls Locked My Twitter Account For 10 Days, And Welp,” BuzzFeed News, December 2, 2017; Russell Brandom, “Facebook’s Report Abuse button has become a tool of global oppression,” The Verge, September 2, 2014,; Ariana Tobin, Madeline Varner, Julia Angwin, “Facebook’s Uneven Enforcement of Hate Speech Rules Allows Vile Posts to Stay Up,” ProPublica, December 18, 2017, Further, perceptions of what does or does not constitute abuse vary among individuals and communities. Content moderators make decisions that reasonable people can disagree with, and content that is flagged in good faith may fall short of violating platform policies.259Jodie Ginsberg, “Social Media Bans Don’t Just Hurt Those You Disagree With—Free Speech Is Damaged When the Axe Falls Too Freely,” The Independent, May 17, 2019,; Queenie Wong, “Is Facebook censoring conservatives or is moderating just too hard?,” CNET, October 29, 2019, Many users who believe their content was removed as a result of inaccurate or malicious reporting struggle to understand why, have little opportunity to make their case, and can effectively be silenced by the slow restoration of their content.260Jillian C. York, “Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas,” Electronic Frontier Foundation, February 13, 2018,

As platforms make much-needed improvements in the ability to flag and remove abusive content, contentious decisions and false positives will inevitably increase. Organizations like the ACLU261Lee Rowland, “Naked Statue Reveals One Thing: Facebook Censorship Needs Better Appeals Process,” ACLU, September 25, 2013, and Ranking Digital Rights262“2019 RDR Corporate Accountability Index,” Ranking Digital Rights, accessed December 2, 2020 have argued that a transparent, expeditious appeals process for content takedowns is critical for the preservation of free expression in any content moderation process. PEN America supports the adoption of the Santa Clara Principles. Released in 2018 by a coalition of civil society organizations and academics, this proposal calls for the review of appealed content by human beings, the opportunity for users to provide context during the appeals process, and explicit notification of the final outcome, including a clear explanation of the decision.263“The Santa Clara Principles on Transparency and Accountability in Content Moderation,” accessed December 2, 2020,

Existing tools and mechanisms: Twitter, Facebook, and Instagram have made improvements to their appeals process in recent years. In 2018, Twitter committed to emailing suspended users to advise them of the content of violating tweets and details about which rule they broke.264“Toxic Twitter-A Toxic Place for Women,” Amnesty International, 2018, In 2019, the platform integrated appeals directly into its mobile app, rather than requiring users to fill out a separate online form, which Twitter claims improved its response speed by 60 percent.265Sarah Perez, “Twitter now lets users appeal violations within its app” TechCrunch, April 2, 2019, In 2020, Instagram also integrated the ability to appeal disabled accounts and content takedowns directly within its app.266Andrew Hutchinson, “Instagram Launches New Appeals Process for Disabled Accounts, Adds Report Tracking In-App.” Social Media Today, February 12, 2020,; “My Instagram Account was Deactivated,” Instagram, accessed October 2020,; “I don’t think Instagram should have taken down my post,” Instagram, accessed December 2, 2020,  Since 2018, Facebook users have had the ability to appeal the removal of posts, photos, and videos, as well as the removal of groups, profiles, and pages267“Facebook Updates Community Standards, Expands Appeals Process,” NPR, April 24, 2018,—though users still have to appeal through a separate online form.268“Why would my Facebook Page get taken down or have limits placed on it?,” Facebook, accessed October 2020, ; “My personal Facebook account is disabled,” Facebook, accessed December 2, 2020,; “I don’t think Facebook should have taken down my post,” Facebook, December 2, 2020, Both Facebook and Instagram now allow users to escalate appeals for content takedowns to a new, purportedly independent oversight board, which will review a small fraction of submissions.269“How do I appeal Instagram’s decision to the Oversight Board?,” Instagram, accessed December 2, 2020,; “How do I appeal 
Facebook’s content decision to the Oversight Board?,” Facebook Help Center, accessed February 25, 2021,

Despite recent improvements, the platforms still have a long way to go on the transparency, speed, and clarity of their appeals processes. As Amnesty International’s “Toxic Twitter” report notes, “A detailed overview of the appeals process, including an explicit commitment to respond to all appeals or a timeframe of when to expect a response is not included in any of Twitter’s policies.”270“Toxic Twitter-A Toxic Place for Women,” Amnesty International, 2018, The EFF’s annual “Who Has Your Back” report found that only Facebook has a satisfactory commitment to providing “meaningful notice” regarding content and account takedowns, and that none of Twitter, Facebook, or Instagram has a satisfactory commitment to “appeals transparency.”271Gennie Gebhart, “Who Has Your Back? Censorship Edition 2019,” Electronic Frontier Foundation, November 7, 2019, Twitter does not include appeals in its transparency report at all.272“Rules Enforcement,” Twitter, accessed December 2, 2020 While Facebook and Instagram provide information about how much content users appeal and how much of the content is restored, they offer no information about the timeliness of responses or restorations.273“Community Standards Enforcement Report,” Facebook, accessed December 2, 2020

Recommendations: PEN America recommends the implementation of significantly more robust and regularized appeals processes for users whose content or accounts have been taken down, restricted, or suspended. Specifically, platforms should:

  • Fully and prominently integrate appeals into the primary user experience, and communicate clearly and regularly with users at every step of the appeals process via notifications within the platform’s desktop and mobile app, as well as through secondary communication channels, like email.
  • Build on the Santa Clara Principles274“The Santa Clara Principles on Transparency and Accountability in Content Moderation,” accessed December 10, 2020, to ensure that users can add context during the appeals process and that humans review appealed content.
  • Create a formal, adequately resourced escalation channel for expediting appeals to address cases of malicious or inaccurate content takedowns and time-sensitive cases where a delay in restoring content or accounts could be harmful, for example during times of urgent political debate or crises. At the very least, this channel should enable an institution, such as a news outlet or civil society organization, to advocate with the platform on behalf of an individual, such as a journalist or human rights defender.
  • Substantially increase transparency about the way the appeals process works. Specifically, regular transparency reports from platforms should include metrics on how much content users appeal, how much of the appealed content is restored, the timeliness of responses for all appeals, and the timeliness of restoration. Platforms should provide independent researchers with disaggregated data supporting these metrics.
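The transparency metrics recommended above are simple to compute once the underlying records exist, which underscores that the obstacle is disclosure, not engineering. The sketch below assumes a hypothetical record format (filing date, decision date, whether content was restored); real transparency reports would aggregate far richer internal data.

```python
# Sketch of the appeals transparency metrics recommended above:
# volume of appeals, share of content restored, and timeliness.
# The record format is hypothetical, invented for illustration.
from datetime import date
from statistics import median

appeals = [
    {"filed": date(2021, 1, 2), "decided": date(2021, 1, 5), "restored": True},
    {"filed": date(2021, 1, 3), "decided": date(2021, 1, 4), "restored": False},
    {"filed": date(2021, 1, 7), "decided": date(2021, 1, 14), "restored": True},
]

total = len(appeals)
restored = sum(a["restored"] for a in appeals)
response_days = [(a["decided"] - a["filed"]).days for a in appeals]

print(f"appeals filed: {total}")
print(f"content restored: {restored} ({restored / total:.0%})")
print(f"median response time: {median(response_days)} days")
```

Publishing metrics like these, alongside the disaggregated records behind them, would let independent researchers verify platform claims rather than taking them on faith.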
Protesters marching in Washington, D.C. in November 2016. Photo by Lorie Shaull


In this report, PEN America lays out the impact of online abuse on the lives, livelihoods, and freedom of expression of writers and journalists and recommends concrete changes that technology companies should make now to better protect all vulnerable people. Our recommendations center on the experiences and needs of United States-based users disproportionately targeted online for their identity and profession and prioritize changes to the design of digital platforms. We base our proposals on in-depth qualitative research, including dozens of interviews and a comprehensive literature review, and on the extensive experience that PEN America has gleaned through its Online Abuse Defense program.

What is online abuse?

PEN America defines online abuse as the “severe or pervasive targeting of an individual or group online with harmful behavior.” “Severe” because a single incident of online abuse, such as a death threat or the publication of a home address, can have serious consequences. “Pervasive” because individual incidents, such as an insult or spamming, may not rise to the level of abuse, but a sustained or coordinated onslaught of incidents like these can cause significant harm. “Harm” can include emotional distress, anxiety, intimidation, humiliation, invasion of privacy, the chilling of expression, professional damage, fear for physical safety, and physical violence.275“Defining ‘Online Abuse’: A Glossary of Terms,” Online Harassment Field Manual, accessed January 2021,

PEN America’s Online Abuse Defense Program

PEN America is a nonprofit that stands at the intersection of literature and human rights to protect free expression in the United States and worldwide. Our mission is to unite writers and their allies to celebrate creative expression and defend the liberties that make it possible. Our Membership consists of over 7,500 journalists, novelists, nonfiction writers, editors, poets, essayists, playwrights, publishers, translators, agents, and other writing professionals, as well as devoted readers and supporters throughout the United States. In 2017, PEN America conducted a survey of over 230 writers and journalists within our network and found that the majority of respondents who had faced online abuse reported fearing for their safety and engaging in self-censorship; this included everything from refraining from publishing their work to deleting their social media accounts.276“Online Harassment Survey: Key Findings​,” PEN America (blog), April 17, 2018, In response, in 2018 we launched our Online Abuse Defense program in the United States, which centers on education, research, and advocacy. We develop resources to equip writers and journalists, as well as their allies and employers, with comprehensive strategies to defend themselves against online abuse. 
Our Field Manual, articles, and tipsheets have reached over 250,000 people.277“Online Harassment Field Manual,” PEN America, accessed March 2021,; Viktorya Vilk, “What to Do When Your Employee Is Harassed Online,” Harvard Business Review, July 31, 2020,; Viktorya Vilk, “What to Do if You’re the Target of Online Harassment,” Slate, June 3, 2020,; Viktorya Vilk, “Why You Should Dox Yourself (Sort Of),” Slate, February 28, 2020, The authors of this report have led presentations and workshops on combating online abuse and bolstering digital safety for over 7,000 journalists, writers, editors, academics, lawyers, activists, and others; we also work closely with newsrooms, publishing companies, and professional associations to develop policies, protocols, and training to protect and support writers and journalists. Finally, we conduct research on the impact of online abuse and the solutions to address it, and we advocate for change to reduce online harm so that all creative and media professionals can continue to express themselves freely.

Research Methodology

For this report, PEN America conducted in-depth interviews between April 2020 and February 2021 with over 50 people, including writers and journalists; editors and newsroom leaders; experts in online abuse and digital safety; researchers and academics who study media, UX, and design; technologists; lawyers; and representatives of technology companies. We conducted a comprehensive cross-disciplinary literature review of over 100 articles, reports, papers, books, and guidelines from academia and civil society, in fields including design, computer science, sociology, human rights, and technology.

We centered our research on the experiences of people disproportionately targeted by online abuse for their identity and/or profession, specifically: 1) writers and journalists whose work requires a public presence online, and 2) women, BIPOC (Black, Indigenous, and people of color), LGBTQIA+ (lesbian, gay, bisexual, transgender, queer, intersex, and asexual) people, and/or people who belong to religious or ethnic minorities.278In a 2020 study from the Anti-Defamation League and YouGov, 35 percent of respondents reported that the harassment they faced was connected to their gender identity, race or ethnicity, sexual orientation, religion, or disability. Among these groups, respondents who identified as LGBTQ+ reported the highest rates of harassment. Women also cited disproportionate levels of harassment, including more than three times the gender-based harassment experienced by men (37 percent versus 12 percent). “Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020, In our recommendations, we prioritize the needs of people at the intersection of these two groups because we know that they experience especially egregious abuse.279Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018,; see also Lucy Westcott, “‘The threats follow us home’: Survey details risks for female journalists in U.S., Canada,” CPJ, September 4, 2019,; for global stats, see also: Julie Posetti et al., “Online Violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1, 2020,; “Troll Patrol Findings,” Amnesty International, 2018, We contend that when technology companies meet the needs of users most vulnerable to online abuse, they will better serve all of their users.

We focus our analysis specifically on Twitter, Facebook, and Instagram because these are the platforms on which United States–based writers and journalists rely most heavily in their work,280Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting (Rep.),” IWMF/TrollBusters, 2018,; “Why journalists use social media,” NewsLab, 2018,; “2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, and the platforms on which United States–based users report experiencing the most abuse.281“Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020,; see also Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, We analyzed in-platform features designed to mitigate online abuse (such as blocking, muting, hiding, restricting, and reporting). We also identified and analyzed relevant third-party tools, some built by private companies and others by universities and nonprofits (such as Block Party, Tall Poppy, BodyGuard, Sentropy Protect, TweetDeleter, Jumbo, Tune, and many others). Although writers and journalists also experience a significant amount of abuse on private messaging platforms (such as email, text messaging, WhatsApp, and Facebook Groups), these platforms carry their own unique privacy and security challenges and fall outside the scope of this report. While our recommendations are rooted in our research on three major social media platforms, we believe they are useful and relevant to all technology companies that design products to facilitate communication and social interaction.

Our research and recommendations focus on the United States, where PEN America’s expertise in online abuse is strongest, but we fully acknowledge that online abuse is a global problem and we understand the urgent need to find locally and regionally relevant solutions. Several of the technology companies analyzed in this report have a global user base, and one of the central challenges to curtailing online abuse is the blanket application of United States–based rules, strategies, and cultural norms internationally.282“Activists and tech companies met to talk about online violence against women: here are the takeaways,” Web Foundation, August 10, 2020, Throughout this report, we endeavor to account for the ways that changes to features on global platforms could play out in regions and geopolitical contexts outside the United States.


This report was written by Viktorya Vilk, program director for Digital Safety and Free Expression; Elodie Vialle, program consultant for Digital Safety and Free Expression; and Matt Bailey, program director for Digital Freedom at PEN America. PEN America’s senior director for Free Expression Programs, Summer Lopez, reviewed and edited the report, as did CEO Suzanne Nossel. James Tager, Nora Benavidez, Stephen Fee, and Dru Menaker provided thoughtful feedback. PEN America would also like to thank the interns whose research, fact-checking, and proofreading contributed significantly to this report: Jazilah Salam, Margaret Tilley, Hiba Ismail, Sara Gronich, Tarini Krishna, Blythe Drucker, Jordan Pilant, Glynnis Eldridge, and Cheryl Hege.

PEN America extends special thanks to the following experts for providing invaluable input on this report: Jami Floyd, senior editor of the Race and Justice unit at New York Public Radio; Dr. Michelle Ferrier, founder of TrollBusters and executive director of Media Innovation Collaboratory; Kat Lo, content moderation lead at Meedan; Azmina Dhrodia, senior policy manager for gender and data rights at World Wide Web Foundation and adviser at Glitch; Jamia Wilson, vice president and executive editor at Random House; Ela Stapley, digital safety adviser and founder of Siskin Labs; T. Annie Nguyen, lead product designer and design researcher; and Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. PEN America is also deeply grateful to the many journalists, writers, scholars, technologists, psychologists, civil society advocates, lawyers, and other experts who agreed to be interviewed for this report, including those who are not acknowledged by name. PEN America appreciates the responsiveness of the representatives at Twitter, Facebook, Instagram, and Google in our many exchanges, as well as the generosity and openness of the founders and staff of the many third-party tools we examined for this report.

Our deep and abiding appreciation goes to the Democracy Fund and Craig Newmark Philanthropies for their support of this project. PEN America also receives financial support from Google and Facebook, but those funds did not support the research, writing, or publication of this report.

The report was edited by Susan Chumsky. Graphic design was by Melissa Joskow, communications assistant at PEN America.