No Excuse for Abuse

What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users

Key Findings

Online abuse strains the mental and physical health of its targets, and can cause people to self-censor, avoid certain subjects, or leave their professions altogether.

When online abuse drives women, LGBTQIA+, BIPOC, and minority writers and journalists to leave industries that are predominantly male, heteronormative, and white, public discourse becomes less open and less free.

Social media platforms should adopt proactive measures that empower users to reduce risk and minimize exposure; reactive measures that facilitate response and alleviate harm; and accountability measures that deter abusive behavior.

Introduction

Online abuse—from violent threats and hateful slurs to sexual harassment, impersonation, and doxing—is a pervasive and growing problem.1PEN America defines online abuse as the “severe or pervasive targeting of an individual or group online with harmful behavior.” PEN America defines doxing as the “publishing of sensitive personal information online—including home address, email, phone number, social security number, photos, etc.—to harass, intimidate, extort, stalk, or steal the identity of a target.” “Defining ‘Online Abuse’: A Glossary of Terms,” Online Harassment Field Manual, accessed January 2021, onlineharassmentfieldmanual.pen.org/defining-online-harassment-a-glossary-of-terms/ Nearly half of Americans report having experienced it,2“Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020, adl.org/online-hate-2020; see also Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ and two-thirds say they have witnessed it.3Maeve Duggan, “Online Harassment 2017: Witnessing Online Harassment,” Pew Research Center, July 11, 2017, www.pewresearch.org/internet/2017/07/11/witnessing-online-harassment/ But not everyone is subjected to the same degree of harassment. Certain groups are disproportionately targeted for their identity and profession. Because writers and journalists conduct so much of their work online and in public, they are especially susceptible to such harassment.4Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting (Rep.),” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf; “Why journalists use social media,” NewsLab, 2018, newslab.org/journalists-use-social-media/#:~:text=The%20researchers%20found%20that%20eight,media%20in%20their%20daily%20work.&text=About%2073%20percent%20of%20the,there%20is%20any%20breaking%20news Among writers and journalists, the most targeted are those who identify as women, BIPOC, LGBTQIA+, and/or members of religious or ethnic minorities.5For impact on female journalists internationally, see Ibid. and Julie Posetti et al., “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1 2020, icfj.org/sites/default/files/2020-12/UNESCO%20Online%20Violence%20Against%20Women%20Journalists%20-%20A%20Global%20Snapshot%20Dec9pm.pdf; For impact on women and gender nonconforming journalists in the U.S. and Canada, see Lucy Westcott, “‘The threats follow us home’: Survey details risks for female journalists in U.S., Canada,” CPJ, September 4, 2019, cpj.org/2019/09/canada-usa-female-journalist-safety-online-harassment-survey/; For impact on women of color, including journalists, see: “Troll Patrol Findings,” Amnesty International, 2018, decoders.amnesty.org/projects/troll-patrol/findings Online abuse is intended to intimidate and censor. When voices are silenced and expression is chilled, public discourse suffers. By reducing the harmful impact of online harassment, platforms like Twitter, Facebook, and Instagram can ensure that social media becomes more open and equitable for all users. In this report, PEN America proposes concrete, actionable changes that social media companies can and should make immediately to the design of their platforms to protect people from online abuse—without jeopardizing free expression.

If you’re going to be a journalist, there is an expectation to be on social media. I feel that I have no choice. The number of followers is something employers look at. This is unfair, because there are not a lot of resources to protect you. No matter what I say about race, there will be some blowback. Even if I say nothing, when my colleague who is a white man takes positions on racism, trolls come after me on social media.

Jami Floyd, senior editor of the Justice and Race Unit at New York Public Radio

The devastating impact of online abuse

Writers and journalists are caught in an increasingly untenable double bind. They often depend on social media platforms—especially Twitter, Facebook, and Instagram—to conduct research, connect with sources, keep up with breaking news, promote and publish their stories, and secure professional opportunities.6“Why journalists use social media,” NewsLab, 2018, newslab.org/journalists-use-social-media/#:~:text=The%20researchers%20found%20that%20eight,media%20in%20their%20daily%20work.&text=About%2073%20percent%20of%20the,there%20is%20any%20breaking%20news; “2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, cision.com/content/dam/cision/Resources/white-papers/SJS_Interactive_Final2.pdf Yet their visibility and the very nature of their work—in challenging the status quo, holding the powerful accountable, and sharing analysis and opinions—can make them lightning rods for online abuse, especially if they belong to frequently targeted groups and/or if they cover beats such as feminism, politics, or race.7Gina Masullo Chen et al., “‘You really have to have thick skin’: A cross-cultural perspective on how online harassment influences female journalists,” Journalism 21, no. 7 (2018), doi.org/10.1177/1464884918768500 “If you’re going to be a journalist, there is an expectation to be on social media. I feel that I have no choice. The number of followers is something employers look at,” says Jami Floyd, senior editor of the Justice and Race Unit at New York Public Radio. “This is unfair, because there are not a lot of resources to protect you. No matter what I say about race, there will be some blowback. Even if I say nothing, when my colleague who is a white man takes positions on racism, trolls come after me on social media.”8Jami Floyd, interview with PEN America, June 6, 2020.

A 2018 study conducted by TrollBusters and the International Women’s Media Foundation (IWMF) found that 63 percent of women media workers in the United States have been threatened or harassed online at least once,9Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf; see also Lucy Westcott, “‘The threats follow us home’: Survey details risks for female journalists in U.S., Canada,” CPJ, September 4, 2019, cpj.org/2019/09/canada-usa-female-journalist-safety-online-harassment-survey/; for global stats, see also: Julie Posetti et al., “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1 2020, icfj.org/sites/default/files/2020-12/UNESCO%20Online%20Violence%20Against%20Women%20Journalists%20-%20A%20Global%20Snapshot%20Dec9pm.pdf; (73 percent of the female journalists who responded to this global survey said they had experienced online abuse, harassment, threats and attacks.) a number significantly higher than the national average for the general population.10Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ Often women are targeted in direct response to their identities.11Women cited disproportionate levels of harassment, including more than three times the gender-based harassment experienced by men (37 percent versus 12 percent). “Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020, adl.org/online-hate-2020 “I am often harassed online when I cover white nationalism and anti-Semitism, especially in politics or when perpetrated by state actors,” says Laura E. Adkins, a journalist and opinion editor of the Jewish Telegraphic Agency. “My face has even been photoshopped into an image of Jews dying in the gas chambers.”12Laura E. Adkins, interview with PEN America, June 15, 2020. Individuals at the intersection of multiple identities, especially women of color, experience the most abuse—by far.13A 2018 study from Amnesty International found that women of color—Black, Asian, Hispanic, and mixed-race women—are 34 percent more likely to be mentioned in abusive or problematic tweets than white women; Black women, specifically, are 84 percent more likely than white women to be mentioned in abusive or problematic tweets. “Troll Patrol Findings,” Amnesty International, 2018, decoders.amnesty.org/projects/troll-patrol/findings

The consequences are dire. Online abuse strains the mental and physical health of its targets and can lead to stress, anxiety, fear, and depression.14Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf; Lucy Westcott, “‘The Threats Follow Us Home’: Survey Details Risks for Female Journalists in U.S., Canada,” Committee to Protect Journalists, September 4, 2019, cpj.org/2019/09/canada-usa-female-journalist-safety-online-harassment-survey/ In extreme cases, it can escalate to physical violence and even murder.15According to a recent global study of female journalists conducted by UNESCO and the International Center for Journalists (ICFJ), 20 percent of respondents reported that the attacks they experienced in the physical world were directly connected with online abuse. Julie Posetti et al., “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1 2020, icfj.org/sites/default/files/2020-12/UNESCO%20Online%20Violence%20Against%20Women%20Journalists%20-%20A%20Global%20Snapshot%20Dec9pm.pdf; The Committee to Protect Journalists has reported that 40 percent of journalists who are murdered receive threats, including online, before they are killed. Elisabeth Witchel, “Getting Away with Murder,” CPJ, October 31, 2017, cpj.org/reports/2017/10/impunity-index-getting-away-with-murder-killed-justice-2/ Because the risks to health and safety are very real, online abuse has forced some people to censor themselves, avoid certain subjects, step away from social media,16“Online Harassment Survey: Key Findings,” PEN America, accessed September 2020, pen.org/online-harassment-survey-key-findings/; Mark Lieberman, “A growing group of journalists has cut back on Twitter, or abandoned it entirely,” Poynter Institute, October 9, 2020, poynter.org/reporting-editing/2020/a-growing-group-of-journalists-has-cut-back-on-twitter-or-abandoned-it-entirely/?utm_source=Weekly+Lab+email+list&utm_campaign=0862b74d55-weeklylabemail&utm_medium=email&utm_term=0_8a261fca99-0862b74d55-396347589; “Measuring the prevalence of online violence against women,” The Economist Intelligence Unit, accessed March 2021, onlineviolencewomen.eiu.com/ or leave their professions altogether.17Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf Dr. Michelle Ferrier, a journalist who founded the anti-harassment nonprofit TrollBusters after facing relentless racist and sexist abuse online, recalls: “I went to management. I went to the police. I went to the FBI, CIA. The Committee to Protect Journalists took my case to the Department of Justice. Nothing changed. But I did. I changed as a person. I became angrier. More wary and withdrawn. I had police patrolling my neighborhood. I quit my job to protect my family and young children.”18“About us—TrollBusters: Offering Pest Control for Journalists,” TrollBusters, June 2020, yoursosteam.wordpress.com/about/

When online abuse drives women, LGBTQIA+, BIPOC, and minority writers and journalists to leave industries that are predominantly male, heteronormative, and white, public discourse becomes less open and less free.19“What Online Harassment Tells Us About Our Newsrooms: From Individuals to Institutions,” Women’s Media Center, 2020, womensmediacenter.com/assets/site/reports/what-online-harassment-tells-us-about-our-newsrooms-from-individuals-to-institutions-a-womens-media-center-report/WMC-Report-What-Online-Harassment_Tells_Us_About_Our_Newsrooms.pdf Individual harms have systemic consequences: undermining the advancement of equity and inclusion, constraining press freedom, and chilling free expression.

I went to management. I went to the police. I went to the FBI, CIA. The Committee to Protect Journalists took my case to the Department of Justice. Nothing changed. But I did. I changed as a person. I became angrier. More wary and withdrawn. I had police patrolling my neighborhood. I quit my job to protect my family and young children.

Dr. Michelle Ferrier, founder of TrollBusters and executive director of Media Innovation Collaboratory

Shouting into the void: inadequate platform response

Hate and harassment did not begin with the rise of social media. But because sustaining user attention and maximizing engagement underpin the business model of these platforms, they are built to prioritize immediacy, emotional impact, and virality. As a result, they also amplify abusive behavior.20Amit Goldenberg and James J. Gross, “Digital Emotion Contagion,” Harvard Business School, 2020, hbs.edu/faculty/Publication%20Files/digital_emotion_contagion_8f38bccf-c655-4f3b-a66d-0ac8c09adb2d.pdf; Luke Munn, “Angry by design: toxic communication and technical architectures,” Humanities and Social Sciences Communications 7, no. 53 (2020), doi.org/10.1057/s41599-020-00550-7; Molly Crockett, “How Social Media Amplifies Moral Outrage,” The Eudemonic Project, February 9 2020, eudemonicproject.org/ideas/how-social-media-amplifies-moral-outrage In prioritizing engagement over safety, many social media companies were slow to implement even basic features to address online harassment. When Twitter launched in 2006, users could report abuse only by tracking down and filling out a lengthy form for each individual abusive comment. The platform did not integrate a reporting button into the app until 2013;21Alexander Abad-Santos, “Twitter’s ‘Report Abuse’ Button Is a Good, But Small, First Step,” The Atlantic, July 31, 2013, theatlantic.com/technology/archive/2013/07/why-twitters-report-abuse-button-good-tiny-first-step/312689/; Abby Ohlheiser, “The Woman Who Got Jane Austen on British Money Wants To Change How Twitter Handles Abuse,” Yahoo! News, July 28, 2013, news.yahoo.com/woman-got-jane-austen-british-money-wants-change-024751320.html it offered a block feature (to limit communications with an abuser) early on, but did not provide a mute feature (to hide abusive comments without alerting and possibly antagonizing the abuser) until 2014.22Paul Rosania, “Another Way to Edit your Twitter Experience: With Mute,” Twitter Blog, May 12, 2014, blog.twitter.com/official/en_us/a/2014/another-way-to-edit-your-twitter-experience-with-mute.html While Facebook offered integrated reporting, blocking, and unfriending features within several years of its launch in 2004,23“Facebook Customer Service: Abuse,” Wayback Machine, December 2005, accessed March 2021, web.archive.org/web/20051231101754/http://facebook.com/help.php?tab=abuse it has since lagged behind in adding new features designed to address abuse. The platform only enabled users to ignore abusive accounts in direct messages in 2017 and to report abuse on someone else’s behalf in 2018.24Mallory Locklear, “Facebook introduces new tools to fight online harassment,” Engadget, December 19, 2017, engt.co/3qnl0Se; Antigone Davis, “New Tools to Prevent Harassment,” About Facebook, December 19, 2017, about.fb.com/news/2017/12/new-tools-to-prevent-harassment/; Antigone Davis, “Protecting People from Bullying and Harassment,” About Facebook, October 2, 2018, about.fb.com/news/2018/10/protecting-people-from-bullying/ When it launched in 2010, Instagram also required users to fill out a separate form to report abuse, and its rudimentary safety guidelines advised users to manually delete any harassing comments.25“User Disputes,” WayBack Machine, 2011, accessed February 16, 2021, web.archive.org/web/20111018040638/help.instagram.com/customer/portal/articles/119253-user-disputes Since 2016, the platform has gradually ramped up its efforts to address online harassment, pulling ahead of Facebook and Twitter, although Instagram did not actually introduce a mute button until 2018.26Megan McCluskey “Here’s How You Can Mute Someone on Instagram Without Unfollowing Them,” Time, May 22 2018, time.com/5287169/how-to-mute-on-instagram/

All of these features were added only after many women, people of color, religious and ethnic minorities, and LGBTQIA+ people, including journalists and politicians, had endured countless high-profile abuse campaigns and spent years advocating for change, applying pressure, and generating public outrage.27Alexander Abad-Santos, “Twitter’s ‘Report Abuse’ Button Is a Good, But Small, First Step,” The Atlantic, July 31, 2013, theatlantic.com/technology/archive/2013/07/why-twitters-report-abuse-button-good-tiny-first-step/312689/; Amanda Marcotte, “Can These Feminists Fix Twitter’s Harassment Problem?,” Slate, November 7, 2014, slate.com/human-interest/2014/11/women-action-media-and-twitter-team-up-to-fight-sexist-harassment-online.html This timeline indicates a much broader issue in tech: in an industry that has boasted of its willingness to “move fast and break things,”28Chris Velazco, “Facebook can’t move fast to fix the things it broke,” Engadget, April 12, 2018, engadget.com/2018-04-12-facebook-has-no-quick-solutions.html efforts to protect vulnerable users are just not moving fast enough.

Users have noticed. According to a 2021 study from Pew Research Center, nearly 80 percent of Americans believe that social media companies are not doing enough to address online harassment.29Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ Many of the experts and journalists PEN America consulted for this report concurred. Jaclyn Friedman, a writer and founder of Women, Action & the Media who has advocated with platforms to address abuse, says she often feels like she’s “shouting into a void because there’s no transparency or accountability.”30Jaclyn Friedman, interview with PEN America, May 28, 2020.

Protest against Facebook’s role in spreading online harm in San Francisco in November 2020. Photo by AP/Jeff Chiu

There is a growing international consensus that the private companies that maintain dominant social media platforms have a responsibility, in accordance with international human rights law and principles, to reduce the harmful impact of abuse on their platforms and ensure that they remain conducive to free expression.31Susan Benesch, “But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies,” Yale Journal on Regulation Online Bulletin 3 (September 14, 2020), digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1004&context=jregonline According to the United Nations’ Guiding Principles for Business and Human Rights (UNGPs), corporations must “avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.”32“Guiding Principles on Business and Human Rights,” United Nations Human Rights Office of the High Commissioner, 2011, ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf In March 2021, Facebook released a Corporate Human Rights Policy rooted in the UNGPs, which makes an explicit commitment to protecting the safety of human rights defenders, including “professional and citizen journalists” and “members of vulnerable groups advocating for their rights,” from online attacks.33 “Corporate Human Rights Policy,” Facebook, accessed March 2021, about.fb.com/wp-content/uploads/2021/03/Facebooks-Corporate-Human-Rights-Policy.pdf The UNGPs further mandate that states must ensure that corporations live up to their obligations, which “requires taking appropriate steps to prevent, investigate, punish and redress such abuse through effective policies, legislation, regulations and adjudication.”34“Guiding Principles on Business and Human Rights,” United Nations Human Rights Office of the High Commissioner, 2011, ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf

I am often harassed online when I cover white nationalism and anti-Semitism, especially in politics or when perpetrated by state actors. My face has even been photoshopped into an image of Jews dying in the gas chambers.

Laura E. Adkins, journalist and opinion editor of the Jewish Telegraphic Agency

What can platforms do now to reduce the burden of online abuse?

In this report, PEN America asks: What can social media companies do now to ensure that users disproportionately impacted by online abuse receive better protection and support? How can social media companies build safer spaces online? How can technology companies, from giants like Facebook and Twitter to small startups, design in-platform features and third-party tools that empower targets of abuse and their allies and disarm abusive users, while preserving free expression? What’s working, what can be improved, and where are the gaps? Our recommendations include proactive measures that empower users to reduce risk and minimize exposure; reactive measures that facilitate response and alleviate harm; and accountability measures that deter abusive behavior.

Among our principal recommendations, we propose that social media companies should:

  • Build shields that enable users to proactively filter abusive content (across feeds, threads, comments, replies, direct messages, etc.) and quarantine it in a dashboard, where they can review and address it with the help of trusted allies.
  • Enable users to assemble rapid response teams and delegate account access, so that trusted allies can jump in to provide targeted assistance, from mobilizing supportive communities to helping document, block, mute, and report abuse.
  • Create a documentation feature that allows users to quickly and easily record evidence of abuse—capturing screenshots, hyperlinks, and other publicly available data automatically or with one click—which is critical for communicating with employers, engaging with law enforcement, and pursuing legal action.
  • Create safety modes that make it easier to customize privacy and security settings, visibility snapshots that show how adjusting settings impacts reach, and identities that enable users to draw boundaries between the personal and the professional with just a few clicks.
  • For extreme or overwhelming abuse, create an SOS button that users could activate to instantly trigger additional in-platform protections and an emergency hotline (phone or chat) that provides personalized, trauma-informed support in real time.
  • Create a transparent system of escalating penalties for abusive behavior—including warnings, strikes, nudges, temporary functionality limitations, and suspensions, as well as content takedowns and account bans—and spell out these penalties for users every step of the way.

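To make the final recommendation above more concrete, here is a minimal sketch of how a transparent, escalating penalty ladder could be modeled in code. The thresholds, penalty names, and the AbuseRecord class are illustrative assumptions for this report's proposal, not a description of any platform's existing system or API.

```python
from dataclasses import dataclass
from enum import Enum


class Penalty(Enum):
    """Escalating responses to abusive behavior (illustrative labels, not any platform's actual policy)."""
    WARNING = "warning"              # first confirmed violation: explain which rule was broken
    NUDGE = "nudge"                  # prompt the user to reconsider before posting again
    FEATURE_LIMIT = "feature_limit"  # temporarily restrict replies, tags, or direct messages
    SUSPENSION = "suspension"        # temporary account suspension
    BAN = "ban"                      # account removal for repeated, severe abuse


# Hypothetical thresholds: how many confirmed violations trigger each rung of the ladder.
LADDER = [
    (1, Penalty.WARNING),
    (2, Penalty.NUDGE),
    (3, Penalty.FEATURE_LIMIT),
    (5, Penalty.SUSPENSION),
    (8, Penalty.BAN),
]


@dataclass
class AbuseRecord:
    user_id: str
    confirmed_violations: int = 0

    def current_penalty(self) -> Penalty:
        """Walk up the ladder and return the highest rung the violation count has reached."""
        applicable = Penalty.WARNING
        for threshold, penalty in LADDER:
            if self.confirmed_violations >= threshold:
                applicable = penalty
        return applicable

    def explain(self) -> str:
        """Spell out, in user-facing language, where the user stands and that further abuse escalates."""
        return (f"You have {self.confirmed_violations} confirmed violation(s). "
                f"Current consequence: {self.current_penalty().value}. "
                f"Additional violations will trigger stronger penalties.")


# Example: a user with three confirmed violations hits the temporary feature limitation.
record = AbuseRecord(user_id="example_user", confirmed_violations=3)
print(record.current_penalty())  # Penalty.FEATURE_LIMIT
print(record.explain())
```

The explain method carries the transparency this recommendation calls for: at every step, users are told which rung of the ladder they are on and what the next consequence would be.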

Our proposals are rooted in the experiences of writers and journalists who identify as women, BIPOC, LGBTQIA+, and/or members of religious or ethnic minorities in the United States, where PEN America’s expertise on online abuse is strongest. We recognize, however, that online abuse is a global problem and endeavor to note the risks and ramifications of applying strategies conceived in and for the United States internationally.35“Activists and tech companies met to talk about online violence against women: here are the takeaways,” Web Foundation, August 10, 2020, webfoundation.org/2020/08/activists-and-tech-companies-met-to-talk-about-online-violence-against-women-here-are-the-takeaways/ We focus on Twitter, Facebook, and Instagram—because United States-based writers and journalists rely on these platforms most in their work,36Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting (Rep.),” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf; “Why journalists use social media,” NewsLab, 2018, newslab.org/journalists-use-social-media/#:~:text=The%20researchers%20found%20that%20eight,media%20in%20their%20daily%20work.&text=About%2073%20percent%20of%20the,there%20is%20any%20breaking%20news; “2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, cision.com/content/dam/cision/Resources/white-papers/SJS_Interactive_Final2.pdf and because it is on these platforms that United States-based users report experiencing the most abuse.37“Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020, adl.org/online-hate-2020; see also Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ But our recommendations are relevant to all technology companies that design products to facilitate communication and social interaction.

We draw a distinction between casual and committed abuse: the former is more organic and plays out primarily among individuals; the latter is premeditated, coordinated, and perpetrated by well-resourced groups. We make the case that technology companies need to better protect and support users facing both day-to-day abuse and rapidly escalating threats and harassment campaigns. While we propose tools and features that can both disarm abusers and empower targets and their allies, we recognize that the lines between abuser, target, and ally are not always clear-cut. In a heavily polarized environment, online abuse can be multidirectional. Though abusive trolls are often thought of as a “vocal and antisocial minority,” researchers at Stanford and Cornell Universities stress that “anyone can become a troll.”38Justin Cheng et al., “Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions,” 2017, cs.stanford.edu/~jure/pubs/trolling-cscw17.pdf Research conducted in the gaming industry found that the vast majority of toxicity came not from committed repeat abusers but from regular users “just having a bad day.”39Jeffrey Lin, “Doing Something About The ‘Impossible Problem’ of Abuse in Online Games,” Vox, July 7, 2015, accessed February 16, 2021, vox.com/2015/7/7/11564110/doing-something-about-the-impossible-problem-of-abuse-in-online-games Because a user can be either an abuser or a target at any time, tools and features designed to address online abuse must approach it as a behavior—not an identity.

No single strategy to fight online abuse will be perfect or future-proof. Any tool or feature for mitigating online abuse could have unintended consequences or be used in ways counter to its intended purpose. “You have to design all of these abuse reporting tools with the knowledge that they are going to be misused,” explains Leigh Honeywell, co-founder and CEO of Tall Poppy, a company that provides protection for individuals and institutions online.40Leigh Honeywell, interview with PEN America, May 15, 2020. Ensuring that systems are designed to empower users rather than simply prohibit bad behavior can help mitigate those risks, preserving freedom while also becoming more resilient to evolving threats.

If technology companies are serious about reducing the harm of online abuse, they must prioritize understanding the experiences and meeting the needs of their most targeted users. Every step of the way, platforms need to “center the voices of those who are directly impacted by the outcome of the design process,”41Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (MIT Press, 2020) argues Dr. Sasha Costanza-Chock, associate professor of civic media at MIT. Moreover, to build features and tools that address the needs of vulnerable communities, technology companies need staff, consultation, and testing efforts that reflect the perspectives and experiences of those communities. Staff with a diverse range of identities and backgrounds need to be represented across the organization—among designers, engineers, product managers, trust and safety teams, etc.—and they need to have the power to make decisions and set priorities. If platforms can build better tools and features to protect writers and journalists who identify as women, BIPOC, LGBTQIA+, and members of religious or ethnic minorities, they can better serve all users who experience abuse.

You can’t have free expression of ideas if people have to worry that they’re going to get doxed or they’re going to get threatened. So if we could focus the conversation on how it is that we can create the conditions for free speech—free speech for reporters, free speech for women, free speech for people of color, free speech for people who are targeted offline—that is the conversation we have to have.

Mary Anne Franks, president of the Cyber Civil Rights Initiative and professor of law at the University of Miami

As an organization of writers committed to defending freedom of expression, PEN America views online abuse as a threat to the very principles we fight to uphold. When people stop speaking out and writing about certain topics due to fear of reprisal, everyone loses. Even more troubling, this threat is most acute when people are trying to engage with some of the most complex, controversial, and urgent questions facing our society—questions about politics, race, religion, gender and sexuality, and domestic and international public policy. Democratic structures depend on a robust, healthy discourse in which every member of society can engage. “You can’t have free expression of ideas if people have to worry that they’re going to get doxed or they’re going to get threatened,” notes Mary Anne Franks, president of the Cyber Civil Rights Initiative and professor of law at the University of Miami. “So if we could focus the conversation on how it is that we can create the conditions for free speech—free speech for reporters, free speech for women, free speech for people of color, free speech for people who are targeted offline—that is the conversation we have to have.”42Mary Anne Franks, interview with PEN America, May 22, 2020.

At the same time, we are leery of giving private companies unchecked power to police speech. Contentious, combative, and even offensive views often do not rise to the level of speech that should be banned, removed, or suppressed. Content moderation can be a blunt instrument. Efforts to combat online harassment that rely too heavily on taking down content, especially given the challenges of implicit bias in both human and automated moderation, risk sweeping up legitimate disagreement and critique and may further marginalize the very individuals and communities such measures are meant to protect. A post that calls for violence against a group or individual, for instance, should not be treated the same as a post that might use similar language to decry that very behavior.43Mallory Locklear, “Facebook is still terrible at managing hate speech,” Engadget, August 3, 2017, engadget.com/2017-08-03-facebook-terrible-managing-hate-speech.html; Tracy Jan, Elizabeth Dwoskin, “A White Man Called Her Kids the N-Word. Facebook Stopped Her from Sharing it.” The Washington Post, July 31, 2017, wapo.st/2Z40H06 Furthermore, some tools that mitigate abuse can be exploited to silence the marginalized and censor dissenting views. More aggressive policing of content by platforms must be accompanied by stepped-up mechanisms that allow users to appeal and achieve timely resolution in instances where they believe that content has been unjustifiably suppressed or removed. Throughout this report, in laying out our recommendations, we address the tensions that can arise in countering abuse while protecting free expression, and propose strategies to mitigate weaponization and unintended consequences. While the challenges and tensions baked into reducing online harms are real, technology companies have the resources and power to find solutions. Writers, journalists, and other vulnerable users have, for too long, endured relentless abuse on the very social media platforms that they need to do their jobs. It’s time for technology companies to step up.

Empowering Targeted Users and Their Allies

In this section we lay out proactive and reactive measures that platforms can take to empower users targeted by online abuse and their allies. Proactive measures protect users from online abuse before it happens or lessen its impact by giving its targets greater control. Unfortunately proactive measures can sometimes be fraught from a free expression standpoint. Sweeping or sloppy implementation, often rooted in algorithmic and human biases abetted by a lack of transparency, can result in censorship, including of creative and journalistic content.44Scott Edwards, “YouTube removals threaten evidence and the people that provide it,” Amnesty International, November 1, 2017, amnesty.org/en/latest/news/2017/11/youtube-removals-threaten-evidence-and-the-people-that-provide-it/; Jillian C. York, “Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas,” Electronic Frontier Foundation, February 13 2018, eff.org/deeplinks/2018/02/insert-better-title-here; Abdul Rahman Al Jaloud et al., “Caught in the Net: The impact of “extremist” speech regulations on Human Rights content,” Electronic Frontier Foundation, Syrian Archive, and Witness, May 30, 2019, eff.org/files/2019/06/03/extremist_speech_regulations_and_human_rights_content_-_eff_syrian_archive_witness.pdf Reactive measures, such as blocking and muting to limit interaction with abusive content, mitigate the harms of online abuse once it is underway but do little to shield targets. Such features sidestep many of the first-order free expression risks associated with proactive measures but are often, on their own, insufficient to protect users from abuse.

It is important to bear in mind that both proactive and reactive measures are themselves susceptible to gaming and weaponization.45Katie Notopoulos, “How Trolls Locked My Twitter Account For 10 Days, And Welp,” BuzzFeed News, December 2, 2017, buzzfeednews.com/article/katienotopoulos/how-trolls-locked-my-twitter-account-for-10-days-and-welp; Tracy Jan, Elizabeth Dwoskin, “A White Man Called Her Kids the N-Word. Facebook Stopped Her from Sharing it.” The Washington Post, July 31, 2017, wapo.st/2Z40H06; Russell Brandom, “Facebook’s Report Abuse button has become a tool of global oppression,” The Verge, September 2, 2014, theverge.com/2014/9/2/6083647/facebook-s-report-abuse-button-has-become-a-tool-of-global-oppression In many cases, the difference between an effective strategy and an ineffective or overly restrictive one depends not only on policies but also on the specifics of how tools and features are designed and whom they prioritize and serve. Our recommendations aim to strike a balance between protecting those who are disproportionately targeted by online abuse for their identity and profession and safeguarding free expression.

Proactive measures: Reducing risk and exposure

Proactive measures are often more effective than reactive ones because they can protect users from encountering abusive content—limiting their stress and trauma and empowering them to express themselves more freely. They can also enable users to reduce their risk and calibrate their potential exposure by, for example, fine-tuning their privacy and security settings and creating distinctions between their personal and professional identities online.

Today most major platforms provide some proactive protections, but these are often difficult to find, understand, and use. Many of the writers and journalists PEN America works with, including those interviewed for this report, were unaware of existing features and tools and found themselves scrambling to deal with online harassment only after it had been unleashed.  “Young journalists,” says Christina Bellantoni, a professor at the USC Annenberg School for Communication and Journalism, often “don’t familiarize themselves with policies and tools because they don’t predict they will ever face problems. When they do, it’s too late. Tools to help young journalists learn more about privacy settings from the outset would go a long way.”46Christina Bellantoni, email to PEN America, January 25, 2021. Social media companies should design and build stronger proactive measures, make them more accessible and user-friendly, and educate users about them.

I wasn’t prepared emotionally for the abuse I saw on my screen and, as a freelancer, received little support from publications. Now I sometimes avoid reporting on certain topics, or I publish pieces, but I just won’t post on social media because I am afraid of the blowback and would rather not deal with it. If I had additional tools to deal with abuse on social media, I would definitely use them. I could cover some topics and post them.

Jasmine Bager, journalist

Safety modes and visibility snapshots: Making it easier to control privacy and security

The challenge: Writers and journalists are especially vulnerable to hacking, impersonation, and other forms of abuse predicated on accessing or exposing private information.47Jeremy Wagstaff, “Journalists, media under attack from hackers: Google researchers,” Reuters, March 28, 2014, reuters.com/article/us-media-cybercrime/journalists-media-under-attack-from-hackers-google-researchers-idUSBREA2R0EU20140328; Reporters Committee for Freedom of the Press, “The dangers of journalism include getting doxxed. Here’s what you can do about it,” Poynter Institute, May 19, 2015, poynter.org/reporting-editing/2015/the-dangers-of-journalism-include-getting-doxxed-heres-what-you-can-do-about-it/ To reduce risks like these, users need to be able to easily fine-tune the privacy and security settings on their social media accounts, especially because platforms’ default settings often maximize the public visibility of content.48“Twitter is public by default, and the overwhelming majority of people have public Twitter accounts. Geolocation is off by default.” Email to PEN America from Twitter spokesperson, October 2020; Matthew Keys, “A brief history of Facebook’s ever-changing privacy settings,” Medium, March 21, 2018, medium.com/@matthewkeys/a-brief-history-of-facebooks-ever-changing-privacy-settings-8167dadd3bd0

Some platforms have gradually given users more granular control over their settings, which is a positive trend.49Matthew Keys, “A brief history of Facebook’s ever-changing privacy settings,” Medium, March 21, 2018, medium.com/@matthewkeys/a-brief-history-of-facebooks-ever-changing-privacy-settings-8167dadd3bd0 Providing users with maximum choice and control without overwhelming them is a difficult balancing act.50Kat Lo, interview with PEN America, May 19, 2020; Caroline Sinders, Vandinika Shukla, and Elyse Voegeli. “Trust Through Trickery,” Commonplace, PubPub, January 5, 2021, doi.org/10.21428/6ffd8432.af33f9c9 The usability of these tools is just as important as their sophistication. “Every year—like clockwork—Facebook has responded to criticisms of lackluster security and data exposure by rolling out ‘improvements’ to its privacy offerings,” writes journalist Matthew Keys. “More often than not, Facebook heralds the changes as enabling users to take better control of their data. In reality, the changes lead to confusion and frustration.”51Matthew Keys, “A Brief History of Facebook’s Ever-Changing Privacy Settings,” Medium, March 21, 2018, medium.com/@matthewkeys/a-brief-history-of-facebooks-ever-changing-privacy-settings-8167dadd3bd0

Adding to the problem, there is no consistency across platforms in how privacy and security settings work or the language used to describe them. These settings are often buried within apps or separate help centers and are time-consuming and challenging to find and adjust.52Caroline Sinders, Vandinika Shukla, and Elyse Voegeli. “Trust Through Trickery,” Commonplace, PubPub, January 5, 2021, doi.org/10.21428/6ffd8432.af33f9c9; Michelle Madejski, Maritza Johnson, Steven M Bellovin, “The Failure of Online Social Network Privacy Settings,” Columbia University Computer Science Technical Reports (July 8, 2011), doi.org/10.7916/D8NG4ZJ1 Even “Google’s own engineers,” according to Ars Technica, have been “confused” by its privacy settings.53Kate Cox, “Unredacted Suit Shows Google’s Own Engineers Confused by Privacy Settings,” ArsTechnica, August 25, 2020, arstechnica.com/tech-policy/2020/08/unredacted-suit-shows-googles-own-engineers-confused-by-privacy-settings/

While many writers and journalists want to maximize their visibility and user engagement, if they find themselves in the midst of an onslaught of abuse—or anticipate one—they need to quickly and easily reduce their visibility until the trouble has passed. Because tightening privacy has real trade-offs, understanding the implications of adjusting specific settings is critically important. As journalist Jareen Imam points out, when users find it “confusing to see what is public and what is not,” they struggle to weigh trade-offs and make informed choices.54Jareen Imam, interview with PEN America, August 25, 2020.

Existing features and tools: As some platforms add increasingly granular choices for adjusting settings, they are also experimenting with features to streamline the process. With Twitter’s “protect my tweets” and Instagram’s “private account” features, users can now tighten their privacy with a single click, restricting who can see their content or follow them. But they cannot then customize settings within these privacy modes to maintain at least some visibility and reach.55“About Public and Protected Tweets,” Twitter, accessed September 2020, help.twitter.com/en/safety-and-security/public-and-protected-tweets; “How do I set my Instagram account to private so that only approved followers can see what I share?,” Instagram Help Center, accessed February 19, 2021, facebook.com/help/instagram/426700567389543/?helpref=hc_fnav&bc[0]=Instagram%20Help&bc[1]=Privacy%20and%20Safety%20Center

Facebook’s settings are notoriously complicated, and users don’t have a one-click option to tighten privacy and security throughout an account.56In India, Facebook introduced a “Profile Picture Guard” feature in 2017 and seems to be experimenting with a new feature that allows users to “Lock my profile,” which means “people they are not friends with will no longer be able to see photos and posts — both historic and new — and zoom into, share and download profile pictures and cover photos.” However, this feature does not yet appear to be available in multiple countries. Manish Singh, “Facebook rolls out feature to help women in India easily lock their accounts,” TechCrunch, May 21, 2020, tcrn.ch/3uDiREJ Its users can proactively choose to limit the visibility of individual posts,57Justin Lafferty, “How to Control who sees your Facebook posts,” Adweek, March 22, 2013, adweek.com/digital/how-to-control-who-sees-your-facebook-posts/ but they cannot make certain types of content, such as a profile photo, private,58“How Do I Add or Change My Facebook Profile Picture?,” Facebook Help Center, accessed January 19, 2021, facebook.com/help/163248423739693?helpref=faq_content; “Who Can See My Facebook Profile Picture and Cover Photo?,” Facebook Help Center, accessed January 19, 2021, facebook.com/help/193629617349922?helpref=related&ref=related&source_cms_id=756130824560105 which can result in the misuse of profile photos for impersonation or non-consensual intimate imagery.59Woodrow Hartzog and Evan Selinger, “Facebook’s Failure to End ‘Public by Default’,” Medium, November 7, 2018, medium.com/s/story/facebooks-failure-to-end-public-by-default-272340ec0c07 The platform does offer user-friendly, interactive privacy and security checkups.60Germain, T., “How to Use Facebook Privacy Settings”, Consumer Reports, October 7, 2020, consumerreports.org/privacy/facebook-privacy-settings/; Matthew Keys, “A Brief History of Facebook’s Ever-Changing Privacy Settings,” Medium, March 21, 2018, medium.com/@matthewkeys/a-brief-history-of-facebooks-ever-changing-privacy-settings-8167dadd3bd0; “Safety Center,” Facebook, accessed December, 2020, facebook.com/safety

In trying to comprehend byzantine settings, some users have turned to external sources. Media outlets and nonprofits, including PEN America, offer writers and journalists training and guidance on tightening privacy and security on social media platforms.61PEN America and Freedom of the Press Foundation offer hands-on social media privacy and security training. See also: “Online Harassment Field Manual,” PEN America, accessed November 16th, 2020, onlineharassmentfieldmanual.pen.org/; Viktorya Vilk, “What to do if you’re the target of online harassment,” Slate, June 3, 2020, slate.com/technology/2020/06/what-to-do-online-harassment.html; Kozinski Kristen and Neena Kapur, “How to Dox Yourself on the Internet,” The New York Times, February 27, 2020, open.nytimes.com/how-to-dox-yourself-on-the-internet-d2892b4c5954 Third-party tools such as Jumbo and Tall Poppy walk users through adjusting settings step by step.62Casey Newton, “Jumbo is a powerful privacy assistant for iOS that cleans up your social profiles,” The Verge, April 9, 2019, theverge.com/2019/4/9/18300775/jumbo-privacy-app-twitter-facebook; “Product: Personal digital safety for everyone at work,” Tall Poppy, accessed September, 2020, tallpoppy.com/product/ While external tools and training are useful and badly needed, few writers, journalists, and publishers currently have the resources or awareness to take advantage of them.63Jennifer R. Henrichsen et al., “Building Digital Safety For Journalism: A survey of selected issues,” UNESCO, 2015, unesdoc.unesco.org/ark:/48223/pf0000232358 (“Digital security training programs for human rights defenders and journalists are increasing. However, approximately 54 percent of 167 respondents to the survey for this report said they had not received digital security training.”) Moreover, the very existence of such tools and training is indicative of the difficulty of navigating privacy and security within the platforms themselves.

Recommendations: Platforms should provide users with robust, intuitive, user-friendly tools to control their privacy and security settings. Specifically, platforms should:

  • Empower users to create and save “safety modes”—multiple, distinct configurations of privacy and security settings that they can then quickly activate with one click when needed.
    • Twitter and Instagram should give users the option to fine-tune existing safety modes (“protect my tweets” and “private account,” respectively) after users activate them. These modes are currently limited in functionality because they are binary (i.e., the account is either private or not).
    • Facebook should introduce a safety mode that allows users to go private with just one click, as Twitter and Instagram have already done, while also ensuring that users can then fine-tune specific settings in the new safety mode.
  • Introduce “visibility snapshots” that clearly communicate to users, in real time, the implications of the changes they are making as they adjust their security and privacy settings. One solution is to provide users with a snapshot of what is publicly visible, as Facebook does with its “view as” feature.64“How can I see what my profile looks like to people on Facebook I’m not friends with?,” Facebook Help Center, accessed February 19, 2021, facebook.com/help/288066747875915 Another is to provide an estimate of how many or which types of users (followers, public, etc.) will be able to see a post depending on selected settings.
    • Twitter and Instagram should add user-friendly, interactive privacy and security checkups, as Facebook has already done, and introduce visibility snapshots.
    • Facebook should enable users to make profile photos private.
  • Regularly prompt users, via nudges and reminders, to review their security and privacy settings and set up the safety modes detailed above. Prompts could proactively encourage users to reconsider including private information that could put them at risk (such as a date of birth or home address).
  • Convene a multi-stakeholder coalition of technology companies, civil society organizations, and vulnerable users—or deploy a specific existing coalition such as the Global Network Initiative,65“Global Network Initiative,” Global Network Initiative, Freedom of Expression and Privacy, July 26, 2020, globalnetworkinitiative.org/ Online Abuse Coalition,66“Coalition on Online Abuse,” International Women’s Media Foundation, iwmf.org/coalition-on-online-abuse/ or Trust & Safety Professional Association67“Overview,” Trust and Safety Professional Association, 2021, tspa.info/—to coordinate consistent user experiences and terminology for account security and privacy across platforms.

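As a rough illustration of the "safety modes" and "visibility snapshots" recommended above, the sketch below models saved bundles of settings that a user could switch between with one click, plus a plain-language preview of how each mode affects reach. The field names, the SafetyMode class, and the audience descriptions are hypothetical assumptions for this proposal; no platform currently exposes such an API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyMode:
    """A named, saved bundle of privacy and security settings (field names are illustrative)."""
    name: str
    posts_visible_to: str        # "public", "followers", or "close_contacts"
    allow_direct_messages: bool
    allow_tagging: bool
    profile_photo_public: bool


# A user might configure several modes in advance and activate one with a single click.
EVERYDAY = SafetyMode("everyday", "public", True, True, True)
LOCKDOWN = SafetyMode("lockdown", "followers", False, False, False)


def visibility_snapshot(mode: SafetyMode, follower_count: int) -> str:
    """The 'visibility snapshot' idea: tell the user, in plain language, who will see new posts."""
    if mode.posts_visible_to == "public":
        audience = "anyone on or off the platform"
    elif mode.posts_visible_to == "followers":
        audience = f"your {follower_count} followers only"
    else:
        audience = "a hand-picked group of close contacts"
    return f"In '{mode.name}' mode, new posts will be visible to {audience}."


# Example: previewing the trade-off before going into lockdown during an abuse spike.
print(visibility_snapshot(EVERYDAY, follower_count=12000))
print(visibility_snapshot(LOCKDOWN, follower_count=12000))
```

The snapshot function is the key design choice: rather than burying settings in help centers, the platform would surface the consequence of each mode at the moment the user chooses it.
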
My Instagram is probably the most personal account that I have. And actually, for a long time, it was a private account. But because of the pandemic, it’s impossible to do reporting without having this public. When you have a private account, you also close yourself off to sources that might want to reach out to tell you something really important … So it’s public now.

Jareen Imam, director of social newsgathering at NBC News

Identities: Distinguishing between the personal and the professional

The challenge:  For many writers and journalists, having a presence on social media is a professional necessity.68“2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, cision.com/content/dam/cision/Resources/white-papers/SJS_Interactive_Final2.pdf Yet the boundaries between the personal and professional use of social media accounts are often blurred. The importance of engaging with an audience and building a brand encourages the conflation of the professional with the personal.69Cara Brems et al., “Personal Branding on Twitter How Employed and Freelance Journalists Stage Themselves on Social Media,” Digital Journalism 5, no. 4 (May 3, 2016), tandfonline.com/doi/full/10.1080/21670811.2016.1176534?scroll=top&needAccess=true As journalist Allegra Hobbs wrote in The Guardian: “All the things that invite derision for influencers—self-promotion, fishing for likes, posting about the minutiae of your life for relatability points—are also integral to the career of a writer online.”70Allegra Hobbs, “The journalist as influencer: how we sell ourselves on social media,” The Guardian, October 21, 2019, theguardian.com/media/2019/oct/20/caroline-calloway-writers-journalists-social-media-influencers A 2017 analysis of how journalists use Twitter found that they “particularly struggle with” when to be “personal or professional, how to balance broadcasting their message with engagement and how to promote themselves strategically.”71Cara Brems et al., “Personal Branding on Twitter How Employed and Freelance Journalists Stage Themselves on Social Media,” Digital Journalism 5, no. 4 (May 3, 2016), tandfonline.com/doi/full/10.1080/21670811.2016.1176534 While writers and reporters may be mindful of the need for privacy, the challenge, as freelance journalist Eileen Truax explains, is that maintaining a social media presence paves the way for professional opportunities: “Many of the invitations I get to participate in projects come to me because they see my activity on Twitter.”72Eileen Truax, interview with PEN America, May 25, 2020.

This fusion of the personal and professional makes writers and journalists vulnerable. Private information found on social media platforms is weaponized to humiliate, discredit, and intimidate users, their friends, and their families. To mitigate risk, Jason Reich, vice president for corporate security at The New York Times, advises journalists to create distinct personal and professional accounts on social media wherever possible, fine-tune privacy and security settings accordingly, and adjust the information they include for each account.73Jason Reich, interview with PEN America, June 9, 2020. But following such procedures is challenging because platforms make it difficult to distinguish between personal and professional accounts, to migrate or share audiences between them, and to target specific audiences. While users can theoretically create and manage multiple accounts on most platforms, in practice a user who decides to create a professional account or page separate from an existing personal one has to start over to rebuild an audience.74Avery E Holton, Logan Molyneux, “Identity Lost? The personal impact of brand journalism,” SAGE 18, no. 2 (November 3, 2015): 195-210, doi.org/10.1177%2F1464884915608816

The COVID-19 pandemic—which has pushed more creative and media professionals into remote and fully digital work—has intensified this dilemma.75Bernard Marr, “How The COVID-19 Pandemic Is Fast-Tracking Digital Transformation In Companies,” Forbes, May 17, 2020, forbes.com/sites/bernardmarr/2020/03/17/how-the-covid-19-pandemic-is-fast-tracking-digital-transformation-in-companies/?sh=592af355a8ee; Max Willens, “‘But I’m still on deadline’: How remote work is affecting newsrooms,” Digiday, March 17, 2020, digiday.com/media/im-still-deadline-work-home-policies-affecting-newsrooms/ “My Instagram is probably the most personal account that I have,” says Jareen Imam, director of social newsgathering at NBC News. “And actually, for a long time, it was a private account. But because of the pandemic, it’s impossible to do reporting without having this public. When you have a private account, you also close yourself off to sources that might want to reach out to tell you something really important.… So it’s public now.”76Jareen Imam, interview with PEN America, August 25, 2020.

Existing features and tools: Twitter77“How to Manage Multiple Accounts,” Twitter, accessed September 28, 2020, help.twitter.com/en/managing-your-account/managing-multiple-twitter-accounts and Instagram78Gannon Burgett, “How to Manage Multiple Instagram Accounts,” Digital Trends, May 17, 2019, digitaltrends.com/social-media/how-to-manage-multiple-instagram-accounts/ allow individual users to create multiple accounts and toggle easily between them. Facebook, on the other hand, does not allow one user to create more than one account79“Can I Create Multiple Facebook Accounts?,” Facebook Help Center, accessed January 19, 2021, facebook.com/help/975828035803295/?ref=u2u; “Can I Create a Joint Facebook Account or Share a Facebook Account with Someone Else?,” Facebook Help Center, accessed January 19, 2021, facebook.com/help/149037205222530?helpref=related&ref=related&source_cms_id=975828035803295 and requires the use of an “authentic name”—that is, the real name that a user is known by offline.80“What names are allowed on Facebook?” Facebook Help Center, accessed February 8, 2021, facebook.com/help/112146705538576 While Facebook enables users to create “fan pages” and “public figure pages,”81“Can I Create Multiple Facebook Accounts?,” Facebook, accessed August 2020, facebook.com/help/975828035803295?helpref=uf_permalink these have real limitations: By prioritizing the posts of friends and family in users’ feeds, Facebook favors the personal over the professional, curbing the reach of public-facing pages and creating an incentive to invest in personal profiles.82“Generally, posts from personal profiles will reach more people because we prioritize friends and family content and because posts with robust discussion also get prioritized—posts from Pages or public figures very broadly get less reach than posts from profiles.” Email response from Facebook spokesperson, January 21, 2021; Adam Mosseri, “Facebook for Business,” Facebook, January 11, 2018, facebook.com/business/news/news-feed-fyi-bringing-people-closer-together; Mike Isaac, “Facebook Overhauls News Feed to focus on what Friends and Family Share,” The New York Times, January 11, 2018, nyti.ms/3sm6b3Z

All of the platforms analyzed in this report are gradually giving users more control over their audience. Twitter is testing a feature that allows users to specify who can reply to their tweets83Suzanne Xie, “Testing, testing… new conversation settings,” Twitter, May 20, 2020, blog.twitter.com/en_us/topics/product/2020/testing-new-conversation-settings.html#:~:text=Before%20you%20Tweet,%20you’ll,or%20only%20people%20you%20mention.&text=People%20who%20can’t%20reply,Comment,%20and%20like%20these%20Tweets. and recently launched a feature that allows users to hide specific replies.84Brittany Roston, “Twitter finally adds the option to publicly hide tweets,” SlashGear, November 21, 2019, slashgear.com/twitter-finally-adds-the-option-to-publicly-hide-tweets-21601045/#:~:text=To%20hide%20a%20tweet,%20tap,who%20shared%20the%20hidden%20tweet. Facebook gives users more control over the visibility of individual posts, allowing users to choose among “public,” “friends,” or “specific friends.”85“What audiences can I choose from when I share on Facebook?,” Facebook, accessed November 30, 2020, facebook.com/help/211513702214269 Instagram has a feature that lets users create customized groups of “close friends” and share stories in a more targeted way, though it has not yet expanded that feature to posts.86Arielle Pardes, “Instagram Now Lets You Share Pics with Just ‘Close Friends’,” Wired, November 30, 2018, wired.com/story/instagram-close-friends/ But none of these platforms allow individual users to share or migrate friends and followers among multiple accounts or between profiles and pages.87Email response from Facebook spokesperson, January 21, 2021; Email response from Instagram spokesperson, January 15, 2021; Email response from Twitter spokesperson, October 30, 2020.

Recommendations: Platforms should make it easier for users to create and maintain boundaries between their personal and professional identities online while retaining the audiences that they have cultivated. There are multiple ways to achieve this:

  • Empower users to create and save “safety modes”—multiple, distinct configurations of privacy and security settings that they can then quickly activate with one click when needed (a minimal illustration follows this list).
  • Give users greater control over who can see their individual posts (i.e., friends/followers versus subsets of friends/followers versus the wider public), which is predicated on the ability to group audiences and target individual posts to subsets of audiences. This is distinct from giving users the ability to go private across an entire account (see “Safety modes,” above).
    • Like Twitter and Instagram, Facebook should make it possible for users to create multiple accounts and toggle easily between them, a fundamental, urgently needed shift from its current “one identity” approach. Facebook should also ensure that public figure and fan pages offer audience engagement and reach that are comparable to those of personal profiles.
    • Like Facebook, Twitter and Instagram should make it easier for users to specify who can see their posts and allow users to migrate or share audiences between personal and professional online identities.
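
To make the “safety modes” recommendation concrete, the sketch below shows one way named bundles of settings could be stored and applied in a single step. It is a minimal illustration only; the setting names and the apply_setting callback are hypothetical placeholders, not any platform’s actual configuration options.

```python
# A minimal sketch of "safety modes": named, user-defined bundles of privacy
# and security settings that can be activated in a single step.
# All setting names here are hypothetical placeholders.

from dataclasses import dataclass, field
from typing import Callable, Dict

Settings = Dict[str, str]

@dataclass
class SafetyMode:
    name: str
    settings: Settings = field(default_factory=dict)

# Example bundles a user might save ahead of time.
LOCKDOWN = SafetyMode("lockdown", {
    "who_can_reply": "people_you_follow",
    "dm_requests": "off",
    "tag_approval": "required",
})
DEFAULT = SafetyMode("default", {
    "who_can_reply": "everyone",
    "dm_requests": "on",
    "tag_approval": "not_required",
})

def activate(mode: SafetyMode, apply_setting: Callable[[str, str], None]) -> None:
    """Apply every setting in the chosen mode via a platform-provided callback."""
    for key, value in mode.settings.items():
        apply_setting(key, value)

if __name__ == "__main__":
    # Stand-in for a real platform API call.
    activate(LOCKDOWN, lambda k, v: print(f"set {k} -> {v}"))
```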


Mitigating risk: While most individual users are entitled to exert control over who can see and interact with their content, for public officials and entities on social media, transparency and accountability are paramount. The courts recently ruled, for example, that during Donald Trump’s presidency, it was unconstitutional for him to block users on Twitter because he was using his “presidential account” for official communications.88Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 1:17-cv-5205 (S.D.N.Y. 2018) There are multiple related cases currently winding their way through the courts, including the ACLU’s lawsuit against state Senator Ray Scott of Colorado for blocking a constituent on Twitter.89“ACLU Sues Colorado State Senator for Blocking Constituent on Social Media,” ACLU of Colorado, June 11, 2019, aclu-co.org/aclu-sues-colorado-state-senator-for-blocking-constituent-on-social-media/ It is especially important that public officials and entities be required to uphold the boundaries between the personal and professional use of social media accounts and ensure that any accounts used to communicate professionally remain open to all constituents. Public officials and entities must also adhere to all relevant laws for record keeping in official, public communications, including on social media.

Fine-tuning privacy and security settings on social media is critical to reducing the risk of hacking, impersonation, doxing, and other forms of abuse predicated on accessing or exposing private information.
Photos by Bronney Hui

Account histories: Managing old content

The challenge: Many writers and journalists have been on social media for over a decade.90Ruth A. Harper, “The Social Media Revolution: Exploring the Impact on Journalism and News Media Organizations,” Inquiries Journal 2, no. 3, (2010), inquiriesjournal.com/articles/202/the-social-media-revolution-exploring-the-impact-on-journalism-and-news-media-organizations They joined in the early days, when platforms like Facebook were used primarily in personal life and privacy settings often defaulted to “public” and were not granular or easily accessible.91Matthew Keys, “A brief history of Facebook’s ever-changing privacy settings,” Medium, March 21, 2018, medium.com/@matthewkeys/a-brief-history-of-facebooks-ever-changing-privacy-settings-8167dadd3bd0 But the ways that creative and media professionals use these platforms have since broadened in scale, scope, and reach. Writers’ and journalists’ long histories of online activity can be mined for old posts that, when resurfaced and taken out of context, can be deployed to try to shame a target or get them reprimanded or fired.92Kenneth P. Vogel, Jeremy W. Peters, “Trump Allies Target Journalists Over Coverage Deemed Hostile to White House,” The New York Times, August 25, 2019, nytimes.com/2019/08/25/us/politics/trump-allies-news-media.html; Aja Romano, “The ‘controversy’ over journalist Sarah Jeong joining the New York Times, explained,” Vox, August 3, 2018, vox.com/2018/8/3/17644704/sarah-jeong-new-york-times-tweets-backlash-racism

Existing features and tools: On Twitter and Instagram, users can delete content piecemeal and cannot easily search through or sort old content, which is cumbersome and impractical.93Abby Ohlheiser, “There’s no good reason to keep old tweets online. Here’s how to delete them,” The Washington Post, July 30, 2018, wapo.st/3o01WHJ; David Nield, “How to Clean Up Your Old Social Media Posts,” Wired, June 14, 2020, wired.com/story/delete-old-twitter-facebook-instagram-posts/ In June 2020, Facebook launched “manage activity,” a feature that allows users to filter and review old posts by date or in relation to a particular person and to archive or delete posts individually or in bulk.94“Introducing Manage Activity,” Facebook, June 2, 2020, about.fb.com/news/2020/06/introducing-manage-activity/ Manage activity is an important and useful new feature, but it does not allow users to search by keywords and remains difficult to find. There are multiple third-party tools that allow users to search through and delete old tweets and posts en masse;95In interviews, PEN America journalists and safety experts mentioned Tweetdelete and Tweetdeleter. Additional third-party tools include Semiphemeral, Twitwipe, and Tweeteraser for Twitter and InstaClean for Instagram. however, some of them cost money and most require granting third-party access to sensitive accounts, which poses its own safety risks depending on the cybersecurity and privacy practices (and ethics) of the developers.

Recommendations: Platforms should provide users with integrated and robust features to manage their personal account histories, including the ability to search through old posts, review them, make them private, delete them, and archive them—individually and in bulk (a minimal sketch of such bulk management follows the list below). Specifically:

  • Twitter and Instagram should integrate a feature that allows users to search, review, make private, delete, and archive old content—individually and in bulk.
  • Facebook should expand its new manage activity feature to enable users to search by keywords and should make this feature more visible and easier to access.
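
As a rough illustration of the bulk-management capability recommended above, the following sketch filters an exported post history by keyword and age and marks the matches for archiving. The Post structure and the archive action are hypothetical stand-ins; a real implementation would run against a platform’s own data stores and permissions.

```python
# A minimal sketch of bulk account-history management: filter old posts by
# keyword and age, then archive the matches. The Post structure is a
# hypothetical stand-in for whatever a platform (or export file) provides.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable, List

@dataclass
class Post:
    post_id: str
    created_at: datetime
    text: str

def find_old_posts(posts: Iterable[Post], keyword: str, older_than_days: int) -> List[Post]:
    """Return posts containing the keyword that are older than the cutoff."""
    cutoff = datetime.utcnow() - timedelta(days=older_than_days)
    return [
        p for p in posts
        if keyword.lower() in p.text.lower() and p.created_at < cutoff
    ]

def bulk_archive(posts: List[Post]) -> None:
    """Stand-in for a platform's archive action; here it just reports."""
    for p in posts:
        print(f"archiving post {p.post_id} from {p.created_at:%Y-%m-%d}")

if __name__ == "__main__":
    history = [
        Post("1", datetime(2012, 6, 1), "old joke I would not make today"),
        Post("2", datetime(2021, 1, 5), "link to my latest article"),
    ]
    bulk_archive(find_old_posts(history, keyword="joke", older_than_days=365))
```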


Mitigating risk: PEN America believes that users should have control over their own social media account histories. Users already have the ability to delete old content on most platforms and via multiple third-party tools. But giving users the ability to purge account histories, especially in bulk, does have drawbacks. Abusers can delete old posts that would otherwise serve as evidence in cases of harassment, stalking, or other online harms. And by removing old content, public officials and entities using social media accounts in their official capacities may undermine accountability and transparency. There are ways to mitigate these drawbacks. It is vital that people facing online abuse are able to capture evidence of harmful content, which is needed for engaging law enforcement, pursuing legal action, and escalating cases with the platforms. For that reason, this report advocates for a documentation feature that would make it easier for targets to quickly and easily preserve evidence of abuse (see “Documentation,” below). In the case of public officials or entities deleting account histories, tools that archive the internet, such as the Wayback Machine, are critically important resources for investigative journalism.96“Politwoops: Explore the Tweets They Didn’t Want You to See,” Propublica, projects.propublica.org/politwoops/; Valentina De Marval, Bruno Scelza, “Did Bolivia’s Interim President Delete Anti-Indigenous Tweets?,” AFP Fact Check, November 21, 2019, factcheck.afp.com/did-bolivias-interim-president-delete-anti-indigenous-tweets There are also federal laws—most centrally the Freedom of Information Act—and state-level laws that require public officials to retain records that may be disclosed to the public; these records include their statements made on social media.975 U.S.C § 552; “Digital Media Policy,” Department of the Interior, accessed February 19, 2021, doi.gov/sites/doi.gov/files/elips/documents/470_dm_2_digital_media_policy_1.pdf Public officials and entities using social media accounts in their official capacities must adhere to applicable record retention laws, which should apply to social media as they do to other forms of communication.

Other people in my friend circle have to take into account that, being Black and queer, I get more negativity than they would. If they’re white or cisgender or heteronormative, they’ll come back and say, ‘You know what? Jordan is getting a lot of flack, so let’s step up to the plate.’

Jordan, blogger (requested to be identified only by their first name)

Rapid response teams and delegated access: Facilitating allyship

The challenge: Online abuse isolates its targets. A 2018 global study from TrollBusters and the IWMF found that 35 percent of women and nonbinary journalists who had experienced threats or harassment reported “feeling distant or cut off from other people.”98Michelle Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” TrollBusters and International Women’s Media Foundation, September 13, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf Many people targeted by online abuse suffer in silence because of the stigma, shame, and victim blaming surrounding all forms of harassment.99Angie Kennedy, Kristen Prock, “I Still Feel Like I Am Not Normal,” Trauma Violence & Abuse 19, no. 5, (December 2, 2018), researchgate.net/publication/309617025_”I_Still_Feel_Like_I_Am_Not_Normal”_A_Review_of_the_Role_of_Stigma_and_Stigmatization_Among_Female_Survivors_of_Child_Sexual_Abuse_Sexual_Assault_and_Intimate_Partner_Violence Often targets have no choice but to engage with hateful or harassing content—in order to monitor, mute, report, and document it—which can be overwhelming, exhausting, and traumatizing.100Erin Carson, “This is your brain on hate,” CNET, July 8, 2017, cnet.com/news/heres-how-online-hate-affects-your-brain/

Many of the writers and journalists in PEN America’s network emphasize the importance of receiving support from others in recovering from episodes of online abuse. Jordan, a blogger who requested to be identified only by their first name, explained: “Other people in my friend circle have to take into account that, being Black and queer, I get more negativity than they would. If they’re white or cisgender or heteronormative, they’ll come back and say, ‘You know what? Jordan is getting a lot of flack, so let’s step up to the plate.’”101“Story of Survival: Jordan,” PEN America Online Harassment Field Manual, October 31, 2017, onlineharassmentfieldmanual.pen.org/stories/jordan-blogger-tennessee/

Existing features and tools: Users can help one another report abuse on Twitter,102“Report abusive behavior,” Twitter, accessed October 2020, twitter.com/en/safety-and-security/report-abusive-behavior Facebook,103“How to Report Things,” Facebook, accessed October 2020, facebook.com/help/1380418588640631/?helpref=hc_fnav and Instagram.104“Abuse and Spam,” Instagram, accessed October 2020, instagram.com/215140222006271#:~:text=If%20you%20have%20an%20Instagram,Guidelines%20from%20within%20the%20app But for allies to offer more extensive support—such as blocking or checking direct messages (DMs) on a target’s behalf—they need to have access to the target’s account. In-platform features that securely facilitate allyship are rare, and those that exist were not specifically designed for this purpose. As a result, many targets of online abuse either struggle on their own or resort to ad hoc strategies, such as handing over passwords to allies,105Jillian C. York, “For Bloggers at Risk: Creating a Contingency Plan,” Electronic Frontier Foundation, December 21, 2011, eff.org/deeplinks/2011/12/creating-contingency-plan-risk-bloggers which undermines their cybersecurity at precisely the moment when they are most vulnerable to attacks.

On Facebook, the owner of a public page can grant other users “admin” privileges, but this feature is not available for personal Facebook profiles.106“How do I manage roles for my Facebook page?,” Facebook, accessed October 2020, facebook.com/help/187316341316631 Similarly, Instagram allows users to share access and designate “roles,” but only on business accounts.107“Manage Roles on a Shared Instagram Account,” Instagram, accessed December 2020, help.instagram.com/218638451837962?helpref=related Twitter comes closest to supporting delegated access with its “teams” feature in TweetDeck, which lets users share access to a single account without sharing a password and assigns each member owner, admin, or contributor status.108“How to use the Teams feature on Tweetdeck,” Twitter, accessed October 2020, help.twitter.com/en/using-twitter/tweetdeck-teams

While useful, these features were designed to facilitate professional productivity and collaboration and meant primarily for institutional accounts or pages.109Sarah Perez, “Twitter enables account sharing in its mobile app, powered by Tweetdeck Teams,” TechCrunch, September 8, 2017, techcrunch.com/2017/09/08/twitter-enables-account-sharing-in-its-mobile-app-powered-by-tweetdeck-teams/ (“This change will make it easier for those who run social media accounts for businesses and brands to post updates, check replies, send direct messages and more, without having to run a separate app.”) The reality is that, like many users, writers and journalists use social media accounts for both personal and professional purposes and need integrated support mechanisms designed specifically to deal with online abuse. Facebook offers a feature that enables users to proactively select a limited number of trusted friends to help them if they get locked out of their account.110“How can I contact the friends I’ve chosen as trusted contacts to get back into my Facebook account?,” Facebook, accessed October 2020, facebook.com/help/213343062033160 If the company adapted this feature to allow users to proactively select several trusted friends to serve as a rapid response team during episodes of abuse—and added this feature to its new “registration for journalists”111“Register as a Journalist with Facebook,” Facebook Business Help Center, accessed January 20, 2021, facebook.com/business/help/620369758565492?id=1843027572514562—it could serve as an example for other platforms.

In PEN America’s trainings and resources, we advise writers and journalists to proactively designate a rapid response team—a small network of trusted allies—who can be called upon to rally broader support and provide specific assistance, such as account monitoring or temporary housing in the event of doxing or threats.112“Deploying Supportive Cyber Communities,” PEN America Online Harassment Field Manual, accessed February 2021, onlineharassmentfieldmanual.pen.org/deploying-supportive-cyber-communities/ Several third-party tools and networks are trying to fill the glaring gap in peer support. As Lu Ortiz, founder and executive director of the anti-harassment nonprofit Vita Activa, explains: “Peer support groups are revolutionary because they destigmatize the process of asking for help, provide solidarity, and generate resilience and strategic decision making.”113Lu Ortiz, email to PEN America, January 21, 2021 The anti-harassment nonprofit TrollBusters coordinates informal, organic support networks for journalists.114Michelle Ferrier, interview with PEN America, February 12, 2021. The anti-harassment nonprofit Hollaback! has developed a platform called HeartMob that provides the targets of abuse with support and resources from a community of volunteers.115“About HeartMob,” HeartMob, accessed December 2020, iheartmob.org/about “Our goal,” says co-founder Emily May, “is to reduce trauma for people being harassed online by giving them the immediate support they need.”116Emily May, email to PEN America, January 25, 2021. Block Party, a tool currently in beta for Twitter, gives users the ability to assign “helpers” to assist with monitoring, muting, or blocking abuse.117“Frequently asked questions,” Block Party, accessed October 2020, blockpartyapp.com/faq/#what-does-a-helper-do (“When you add a Helper, you can set their permissions to be able to view only, flag accounts, or even mute and block on your behalf. Mute and block actions apply directly to your Twitter account, but Helpers can’t post tweets from your Twitter account nor can they access or send direct messages.”) Squadbox lets users designate a “squad” of supporters to directly receive and manage abusive content in email in-boxes.118Katilin Mahar, Amy X. Zhang, David Karger, “Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation,” CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, no. 586 (April 21, 2018): 1-13, doi/10.1145/3173574.3174160; Haystack Group, accessed October 2020, haystack.csail.mit.edu/ These tools and communities provide models for how platforms could integrate peer support.

Recommendations: Platforms should add new features and integrate third-party tools that facilitate peer support and allyship. Specifically, platforms should:

  • Enable users to proactively designate a limited number of trusted allies to serve as a rapid response team, who can be called upon to work together or individually to monitor, report, and document abuse that is publicly visible and to rally a broader online community to help.
  • Offer users the ability to grant specific members of their rapid response team access to their accounts, akin to the “delegate” system available on Gmail.126“Set Up Mail Delegation—Gmail Help,” Google, accessed January 5, 2021, support.google.com/mail/answer/138350?hl=en These delegates could assist with tasks that require direct account access, such as blocking, muting, and reporting abuse in DMs. Users should be able to control the level of access their delegates have (to public feeds versus private DMs, for example); a minimal sketch of such a permission model follows this list.
    • Twitter should integrate its “teams” feature, which is currently available only through TweetDeck, more directly into the primary user experience and empower users to specify exactly which anti-harassment features (monitoring, blocking, muting, reporting, etc.) their delegates can access.
    • Instagram should extend its “roles” feature from business accounts to all accounts.
    • Facebook should extend admin privileges from pages to profiles.
  • Periodically nudge users to create rapid response teams and assign delegates, in tandem with security checkups, potentially when a user reaches a certain follower threshold or immediately after the user has reported online abuse.
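
To illustrate the kind of scoped access described above, the sketch below models delegated permissions as an explicit set of grants that every delegated action is checked against, in the spirit of the Gmail-style delegation mentioned in the list. The scope names and the Delegate structure are hypothetical and are not drawn from any platform’s actual permission system.

```python
# A minimal sketch of scoped delegated access for rapid response teams.
# Scope names are illustrative, not any platform's actual permissions.

from dataclasses import dataclass
from enum import Enum, auto
from typing import FrozenSet

class Scope(Enum):
    VIEW_PUBLIC_REPLIES = auto()
    MUTE = auto()
    BLOCK = auto()
    REPORT = auto()
    VIEW_DMS = auto()   # intentionally a separate grant, off by default
    REPORT_DMS = auto()

@dataclass(frozen=True)
class Delegate:
    username: str
    scopes: FrozenSet[Scope]

def can(delegate: Delegate, action: Scope) -> bool:
    """Check whether a delegated action is allowed before performing it."""
    return action in delegate.scopes

# Example: an ally who may monitor and report public abuse but cannot read DMs.
ally = Delegate("trusted_friend", frozenset({
    Scope.VIEW_PUBLIC_REPLIES, Scope.MUTE, Scope.BLOCK, Scope.REPORT,
}))

assert can(ally, Scope.REPORT)
assert not can(ally, Scope.VIEW_DMS)
```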

Shield and dashboard: Treating online abuse like spam

The challenge: Regularly interacting with hate and harassment is harmful, and users need to have greater control over their exposure to it. As Larry Rosen, professor emeritus of psychology at California State University, explained to Consumer Reports: “You’re going to start feeling more negative, maybe depressed, more stressed, more anxious. The advice I’d give is to identify where the negative stuff is coming from and hide it all.”127Thomas Germain, “How to Filter Hate Speech, Hoaxes, and Violent Speech Out of Your Social Feeds,” August 13, 2020, consumerreports.org/social-media/combat-hate-speech-and-misinformation-on-social-media/

Lessons can be learned from the world of email, where managing and metering spam have been a qualified success. In her book The Internet of Garbage, technologist and journalist Sarah Jeong makes the connection between spam and the “garbage” of harassment. “Dealing with garbage is time-consuming and emotionally taxing,” she writes. And while “patterning harassment directly after anti-spam is not the answer,” there are “obvious parallels.”128Sarah Jeong, The Internet of Garbage (Vox Media, Inc., 2018), cdn.vox-cdn.com/uploads/chorus_asset/file/12599893/The_Internet_of_Garbage.0.pdf Taking inspiration from efforts to reduce the volume and visibility of spam, platforms can do more to proactively identify online abuse, filter it, and hide it—from the feeds, notifications, and DMs of individual users.

Most major social media platforms already rely on a combination of automation and human moderation to proactively identify certain kinds of harmful content in order to reduce its reach, label it, hide it behind screens, or delete it altogether—for all users.129Kat Lo, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, meedan.com/reports/toolkit-for-civil-society-and-moderation-inventory/. The challenge with online abuse, however, is that it is heavily context-dependent and can often fall into gray areas that both computers and humans have difficulty adjudicating. Shielding individual users from abusive “garbage” and giving them greater control over whether and how they interact with it can provide an alternative to overzealous proactive content moderation, which can severely undermine free expression for all users.

Existing features and tools: The technology to more accurately and effectively identify and filter context-dependent harmful content is currently being built. Twitter,130“How to use advanced muting options,” Twitter, accessed October 2020, help.twitter.com/en/using-twitter/advanced-twitter-mute-options Facebook,131“How do I Mute or Unmute a Story on Facebook?,” Facebook, accessed September 2020, facebook.com/help/408677896295618 and Instagram132Alex Kantrowitz, “Instagram Rolls Out Custom And Default Keyword Filtering To Combat Harassment,” Buzzfeed News, September 12, 2016, buzzfeednews.com/article/alexkantrowitz/instagram-keyword-filtering-to-fight-harassment already allow users to hide, mute, and filter some content in feeds, messages, and notifications, but these features are fairly limited in functionality, largely reactive, and rarely quarantine content.

Tools such as Perspective,133“Perspective API, which uses machine learning to identify toxic language, is used to give feedback to commenters, help moderators more easily review comments, and keep conversations open online.” Email to PEN America from Jigsaw, February 2, 2021; Perspective, accessed October 2020, perspectiveapi.com/#/home Coral,134Coral by Vox Media, accessed December 2020, coralproject.net/ L1ght,135“FAQ,” L1ght, accessed September, 2020, l1ght.com/faq/ and Sentropy136John Redgrave & Taylor Rhyne (Founders, Sentropy), interview with PEN America, June 17, 2020; “Our Mission Is to Protect Digital Communities,” About, Sentropy Technologies, Inc., accessed January, 2021, sentropy.com/about use machine learning to proactively identify and filter harassment at scale. Many newsrooms and other publishers use Coral and Perspective, for example, to automatically identify and quarantine “toxic” content in the comment sections beneath articles, which human content moderators can then evaluate.137Coral by Vox Media, accessed December 2020, coralproject.net/; “About the API,” Perspective, accessed August 2020, support.perspectiveapi.com/s/about-the-api
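
Several of these services expose toxicity scoring as a simple request-and-response API. The sketch below is modeled loosely on the publicly documented request shape of Jigsaw’s Perspective API; the endpoint, field names, and the threshold mentioned in the comment are included for illustration only and should be verified against current documentation before any real use.

```python
# A rough sketch of scoring a comment for toxicity via an external classifier.
# Modeled loosely on the documented request shape of Jigsaw's Perspective API;
# verify the endpoint, fields, and quota rules against current documentation.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str) -> float:
    """Return a 0-1 probability-style toxicity score for a piece of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("You are a disgrace and should quit.")
    print(f"toxicity ~ {score:.2f}")  # a moderation layer might quarantine above ~0.8
```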

Third-party tools that help individual users (rather than institutions) filter and hide abuse—among them Tune, Block Party, Sentropy Protect, and BodyGuard—have also emerged in recent years. Jigsaw’s Tune is an experimental web browser extension that aims to use machine learning to allow users to adjust the toxicity level of the content they interact with, including content on Facebook and Twitter.138“Tune (Experimental), Chrome Web Store, accessed September, 2020, chrome.google.com/webstore/detail/tune-experimental/gdfknffdmmjakmlikbpdngpcpbbfhbnp?hl=en#:~:text=Tune%20is%20a%20Chrome%20extension,in%20comments%20across%20the%20internet Block Party, currently available on Twitter with the goal of expanding to other platforms, aims to proactively identify potentially abusive accounts, automatically block or mute them, and silo related content; users can then choose to review the accounts, report them, and/or unblock and unmute them.139Ingrid Lunden, “Sentropy launches tool for people to protect themselves from social media abuse, starting with Twitter,” February 9, 2021, tcrn.ch/3b4KgHO ; “Stanford grad creates Block Party app to filter out Twitter trolls,” ABC7 KGO, January 29, 2021, abc7news.com/block-party-twitter-online-harassment-internet-trolls/10136701/ Block Party founder Tracy Chou experienced egregious online abuse as a woman of color in tech and created the tool to serve as a “spam folder.”140Shannon Bond, “Block Party Aims To Be A ‘Spam Folder’ For Social Media Harassment,” NPR, February 23, 2021, npr.org/2021/02/23/970300911/block-party-aims-to-be-a-spam-folder-for-social-media-harassment She explains: “We need to know what people are saying, we need to collect it, we need to keep an eye on it, but we also need to stop seeing it if we want to preserve ourselves.”141Tracy Chou (Block Party), interview with PEN America, August 18, 2020. Both Sentropy Protect and BodyGuard use machine learning to proactively identify and silo abusive content, which users can then review and address; the latter is available for multiple languages on Twitter, Twitch, Instagram, and YouTube.142“Frequently asked questions,” BodyGuard, accessed October 2020, bodyguard.ai/faq; Matthieu Boutard (BodyGuard), interview with PEN America, June 26, 2020.

Recommendation: Platforms should create a shield that enables users to proactively filter abusive content (across feeds, threads, comments, replies, direct messages, etc.) and quarantine it in a dashboard, where users could then review and address the content as needed, with the help of trusted allies. A minimal sketch of how a shield and dashboard could fit together appears after the lists below.

How would the shield work?

  • The shield would proactively identify abusive content, filter it out (by hiding it from the targeted user but not from all users), and automatically quarantine it in the dashboard (see below).
  • Users could turn on the shield with just one click from within the platform’s primary user experience.
  • Users could fine-tune the shield to adjust the toxicity level of content they filter out.
  • Users would receive prompts to turn on or fine-tune the shield when the platform detects unusual account activity.

How would the dashboard work? From within the dashboard, users would be able to:

  • Review quarantined content to block, mute, or report it—and related accounts—including in bulk. Content should be blurred by default, with the option of revealing it for review. Ideally, content should also be labeled with the relevant abusive tactic (hate, slur, threat, doxing, etc.) to help users and their allies prioritize what to review.
  • Manually add abusive content to the dashboard that was missed by the shield.
  • Manually release from the dashboard content that was mistakenly filtered as abusive or that the user does not perceive as abusive.
  • Document the abuse (see “Documentation,” below).
  • Activate rapid response teams to provide peer support, including giving trusted allies delegated access to the dashboard to help manage abuse (see “Rapid response teams and delegated access,” above).
  • Access account privacy and security settings (see “Safety modes,” above).
  • Access account history management tools (see “Managing account histories,” above).
  • Access personalized support through an SOS button and emergency hotline (see “Creating an SOS button and emergency hotline,” below).
  • Access external resources such as mental health hotlines, legal counseling, cybersecurity help, and other direct support.
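
The sketch below pulls these pieces together at a conceptual level: items are scored by a pluggable classifier (a hypothetical score function standing in for tools like those described earlier), anything above a user-adjustable threshold is hidden from the target and quarantined, and the dashboard holds the quarantined queue for later review or release. It describes the idea, not any platform’s actual architecture.

```python
# A conceptual sketch of a "shield" that quarantines likely-abusive content
# into a per-user dashboard instead of surfacing it. The score() callback is a
# hypothetical stand-in for a toxicity classifier; the labels are illustrative.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Item:
    item_id: str
    author: str
    text: str
    toxicity: float = 0.0

@dataclass
class Dashboard:
    quarantined: List[Item] = field(default_factory=list)

    def release(self, item_id: str) -> None:
        """Return a mistakenly filtered item to the user's normal feed."""
        self.quarantined = [i for i in self.quarantined if i.item_id != item_id]

@dataclass
class Shield:
    score: Callable[[str], float]   # pluggable classifier
    threshold: float = 0.8          # user-adjustable sensitivity
    dashboard: Dashboard = field(default_factory=Dashboard)

    def filter(self, item: Item) -> bool:
        """Return True if the item is shown normally, False if quarantined."""
        item.toxicity = self.score(item.text)
        if item.toxicity >= self.threshold:
            self.dashboard.quarantined.append(item)  # hidden from the target only
            return False
        return True

if __name__ == "__main__":
    def fake_score(text: str) -> float:
        return 0.95 if "slur" in text else 0.05

    shield = Shield(score=fake_score, threshold=0.8)
    for it in [Item("1", "@troll", "a slur-laden reply"), Item("2", "@reader", "great piece!")]:
        shown = shield.filter(it)
        print(it.item_id, "shown" if shown else "quarantined")
```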

Mitigating risk: The automated filtering of harmful content is an imperfect science—with false positives, rapidly evolving and coded forms of abuse and hate, and challenges analyzing symbols and images.143Nicolas Kayser-Bril, “Automated moderation tool from Google rates People of Color and gays as ‘toxic,’” Algorithm Watch, algorithmwatch.org/en/story/automated-moderation-perspective-bias/; Ilan Price et al, “The Challenge of Identifying Subtle Forms of Toxicity Online,” Medium, December 12, 2018, medium.com/the-false-positive/the-challenge-of-identifying-subtle-forms-of-toxicity-online-465505b6c4c9 Platforms should work more closely with one another, with companies that build third-party tools, and with civil society to create and maintain a shared taxonomy of abusive tactics, terms, symbols, etc., and to create publicly available data sets and heuristics for independent review.144Michele Banko, Brendon MacKeen, and Laurie Ray, “A Unified Taxonomy for Harmful Content,” Sentropy Technologies, 2020, aclweb.org/anthology/2020.alw-1.16.pdf (This paper from researchers at Sentropy provides a solid foundation for a shared taxonomy.)
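
One lightweight way to support that collaboration is to exchange taxonomy entries in a structured, versioned format that can be reviewed and updated as abusive language evolves. The fields below are purely illustrative assumptions, not an existing standard.

```python
# An illustrative (hypothetical) schema for entries in a shared abuse taxonomy
# that platforms, tool builders, and researchers could exchange and review.

import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class TaxonomyEntry:
    tactic: str            # e.g., "doxing", "hateful slur", "coded harassment"
    description: str
    example_terms: List[str]
    languages: List[str]
    last_reviewed: str     # ISO date, kept current as abuse evolves

entry = TaxonomyEntry(
    tactic="coded harassment",
    description="Terms or symbols used to evade keyword filters",
    example_terms=["(placeholder example)"],
    languages=["en"],
    last_reviewed="2021-02-01",
)
print(json.dumps(asdict(entry), indent=2))
```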

Reactive measures: Facilitating response and reducing harm

Platforms currently offer considerably more options for reacting to online abuse than for proactive protection. Most platforms offer some form of blocking, which is intended to cut off further communication between abuser and target. Nearly all of them also offer some form of muting, which hides abuse from its intended target but not from other users. These mechanisms are vitally important, but they are also inconsistent, insufficient, and inherently limited. Blocking, for example, can be taken as a provocation and escalate abuse, while muting can mask serious threats. No platform analyzed in this report provides a mechanism to track or document abusive content. Most platforms enable users to report abuse, but there is widespread consensus that content moderation systems in general—and reporting features specifically—are often ineffective and cannot adequately keep up with the volume of abuse proliferating on platforms.

Documentation: Recording evidence of abuse

The challenge: All users, including writers and journalists, need to have the ability to maintain a detailed, exportable record of the online abuse they’ve been subjected to. Documentation serves as a prerequisite for engaging with law enforcement or pursuing legal action. It can also help targets track online abuse and facilitate communication with allies and employers. The pressing need for a documentation feature is underscored by the fact that targets can lose evidence if, for instance, an abuser deliberately deletes the content as soon as it has been seen or if the content is reported, determined to be a violation of platform policies, and removed.145Anna Goldfarb, “Expert Advice on How to Deal with Online Harassment,” Vice, March 19, 2018, vice.com/en_us/article/bjp8ma/expert-advice-on-how-to-deal-with-online-harassment; “Documenting Online Harassment,” PEN America Online Harassment Field Manual, accessed February 2021, onlineharassmentfieldmanual.pen.org/documenting-online-harassment/

In 2015, the Electronic Frontier Foundation (EFF) advised digital platforms to build “tools that allow targets of harassment to preserve evidence in a way that law enforcement can understand and use. Abuse reports are currently designed for internet companies’ internal processes, not the legal system.”146Danny O’Brien and Dia Kayyali, “Facing the Challenge of Online Harassment,” Electronic Frontier Foundation, January 8, 2015, eff.org/deeplinks/2015/01/facing-challenge-online-harassment According to lawyer Carrie Goldberg, who founded a law firm specializing in defending the targets of online abuse, a documentation feature that provides evidence that could be used in court in the United States should record “screenshots of abusive content, URLs, the social media platform on which abuse occurred, the abuser’s username/handle and basic account info, a time and date stamp, and the target’s username/handle.”147Carrie Goldberg, interview by PEN America, May 26, 2020. Ideally this information would be encrypted where sensible to protect the privacy of all parties, as well as to protect the integrity of the records for use in court.
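
As an illustration of what such a record might capture, the sketch below mirrors the fields Goldberg describes and serializes them for export. The field names and the JSON format are assumptions for the example; any real implementation would need to satisfy jurisdiction-specific evidentiary standards, including integrity and privacy protections.

```python
# A minimal sketch of an exportable evidence record for documenting abuse,
# mirroring the fields described above. Field names and the JSON export format
# are illustrative; evidentiary requirements vary by jurisdiction.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EvidenceRecord:
    platform: str
    content_url: str
    screenshot_path: str
    abuser_handle: str
    abuser_account_info: str
    target_handle: str
    captured_at: str  # ISO 8601 timestamp

def capture(platform: str, url: str, screenshot: str,
            abuser: str, abuser_info: str, target: str) -> EvidenceRecord:
    """Assemble a record with a UTC timestamp at capture time."""
    return EvidenceRecord(
        platform=platform,
        content_url=url,
        screenshot_path=screenshot,
        abuser_handle=abuser,
        abuser_account_info=abuser_info,
        target_handle=target,
        captured_at=datetime.now(timezone.utc).isoformat(),
    )

def export(records: list) -> str:
    """Serialize records so they can be shared with counsel or law enforcement."""
    return json.dumps([asdict(r) for r in records], indent=2)

if __name__ == "__main__":
    rec = capture("twitter", "https://twitter.com/example/status/123",
                  "evidence/123.png", "@abuser", "account created 2021-01-01",
                  "@target")
    print(export([rec]))
```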

A documentation feature needs to have a simple interface that is easily navigable by non-expert users who may be traumatized or under duress. “Imagine if you could select [abusive] mentions and just put them in a report,” says technologist and researcher Caroline Sinders. “Journalists could easily capture all instances of harassment and forward them to an editor or trusted colleague for advice.”148Caroline Sinders, interview by PEN America, June 4, 2020. Users need to be able to capture all publicly available data for abusive content with just one click. Better still, a documentation feature could automatically record all key data for any content proactively detected by toxicity filters (see “Shield and dashboard,” above).

Existing features and tools: Most social media platforms do not offer any tools that facilitate the documentation of online abuse. Twitter comes closest. When users report abusive tweets, they can request that Twitter email them some key information, but users have to proactively report abuse and proactively request this information.149Kaofeng Lee, Ian Harris, “How to Gather Technology Abuse Evidence for Court,” National Council of Juvenile and Family Court Judges, accessed August 2020, ncjfcj.org/wp-content/uploads/2018/02/NCJFCJ_SRL_HowToGatherTechEvidence_Final.pdf (“When you report an abusive tweet to Twitter, you have the option to ask Twitter to email you a report. This report includes: the threatening tweet, the username of the person who tweeted, date and time of the tweet, your account information, and the date and time of your report.”) Beyond that, on Facebook, Instagram, and Twitter, users must manually take screenshots, save links, track metadata, and create logs of their abuse, all of which can be time-consuming and retraumatizing.

As for existing third-party tools, PEN America was able to identify only two that facilitate documentation and are specifically designed for abuse: JSafe, an app in beta developed by the Reynolds Journalism Institute with the Coalition for Women in Journalism, and DocuSafe, from the National Network to End Domestic Violence. Both of these apps still require users to manually track and enter data, but they offer a single place to store and organize it.150“JSafe: Free mobile application for women journalists to report threats,” Women in Journalism, accessed October 2020, womeninjournalism.org/jsafe; “DocuSAFE: Documentation and Evidence Collection App,” Technology Safety, techsafety.org/docusafe Google’s Jigsaw team informed PEN America that they are “experimenting in the space of documentation and reporting to help targets of online harassment manage and take action on the harassment they receive … building on our experience developing Perspective API, which uses machine learning to identify toxic language.”151Email response from Adesola Sanusi, product manager at Jigsaw, February 22, 2021.

Recommendations: Platforms should develop a documentation feature to empower users and their allies to quickly and easily record evidence of abusive content—capturing screenshots, hyperlinks, and other publicly available data automatically or with one click. This feature is especially important for content that is threatening or heightens the risk of physical violence, such as doxing. Specifically, platforms should:

  • Enable users to automatically capture all relevant publicly available data for content that is flagged as abusive via user muting, restricting, blocking, or reporting, as well as proactive content filtering. To ensure that the feature meets evidentiary requirements for legal proceedings, platforms need to work with civil society, law enforcement, and legal experts to create documentation standards in each country and jurisdiction where they operate.
  • Enable users to manually document abusive content with one click and to manually add additional data, including context and relevant hashtags, to supplement automatic documentation.
  • Enable users to download or export documented abuse so it can then be shared with third parties such as nonprofit organizations, employers, support networks, and legal counsel.
  • Ensure that the documentation feature, which will require at least some engagement with abusive content, is user-friendly and designed using trauma-informed, ethical frameworks.

Muting, blocking, and restricting: Improving and standardizing existing features

The challenge: Users targeted by online abuse, including writers and journalists, need to have the ability to limit contact with an abuser and to control or limit exposure to abusive content. While most platforms are building increasingly sophisticated features for these purposes, there is no consistency across platforms in terms of language, functionality, or the mitigation of potential unintended consequences.152Kat Lo, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, meedan.com/reports/toolkit-for-civil-society-and-moderation-inventory/ (“Different social media platforms use different terms to describe similar or identical moderation features, and conversely, use the same terms to describe moderation features that are implemented differently across platforms.”) Each feature works somewhat differently on each platform, and not every platform offers every feature, making it confusing and time-consuming for users to understand the reactive measures available to them.

Screenshot demonstrating how to restrict an abusive account on Instagram. Photo by Instagram

Existing features and tools:

Blocking allows users to cut off contact and communication with abusers.153Kat Lo, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, meedan.com/reports/toolkit-for-civil-society-and-moderation-inventory/ But as Pulsar, an audience intelligence company, explained in a 2018 audit of blocking features: “Blocking remains a highly inconsistent experience on different social platforms.”154Victoria Gray, “The Most Confusing and Necessary Social Media Feature: The State of Blocking in 2018,” Pulsar, June 5, 2018, pulsarplatform.com/blog/2018/state-blocking-social-media-twitter-facebook-instagram-snapchat-whatsapp-2018 On Facebook, targets have to block the abuser on messenger and then separately block the abuser’s profile in order to ensure that their public-facing content is no longer visible to the blocked abuser.155“What Is Blocking on Facebook and How Do I Block Someone?,” Facebook Help Center, accessed March 2021, facebook.com/help/168009843260943?helpref=faq_content; “What happens when I block messages from someone while I’m using Facebook?,” Facebook Help Center, accessed November 2020, facebook.com/help/389645087895231 On Twitter, a blocked abuser cannot see any information on the target’s profile (except their profile photo).156Victoria Gray, “The Most Confusing and Necessary Social Media Feature: The State of Blocking in 2018,” Pulsar, June 5, 2018, pulsarplatform.com/blog/2018/state-blocking-social-media-twitter-facebook-instagram-snapchat-whatsapp-2018/ On Instagram, most of the target’s account (except their name, profile photo, mutual followers, and number of posts) disappears from the blocked abuser’s view.157Mehvish Mushtaq, “What Happens When You Block Someone on Instagram,” Guiding Tech, April 4, 2019, guidingtech.com/what-happens-block-on-instagram/ Instagram offers the most robust blocking features, retroactively removing the comments and likes from blocked accounts and enabling users to select up to 25 comments and then block, en masse, all the accounts posting those comments.158Jeff Yeung, “Instagram Brings in New Features to Curb Spam and Offensive Comments,” Hypebeast, May 13, 2020, hypebeast.com/2020/5/instagram-spam-comments-tags-pins-feature; “How can I manage multiple comments on my Instagram posts?” Instagram Help Center, accessed December 7, 2020 facebook.com/help/instagram/289376615404536

Muting allows users to make abusive content invisible, but only from their individual perspective (not from all users). The muted material can be a specific piece of content, a specific user, a keyword, or notifications.159Lo, Kat, “Toolkit for Civil Society and Moderation Inventor,” Meedan, November 18, 2020, meedan.com/reports/toolkit-for-civil-society-and-moderation-inventory/ Like blocking, muting works differently on each platform, and the options can get granular. Furthermore, different platforms use different terms to refer to the act of hiding content. On Twitter, users can “mute” entire accounts, individual tweets, and replies to their tweets, and they can “mute” content by keyword, emoji, or hashtag. But they cannot mute DMs, only the notifications announcing them, and there is no expiration date for muting.160“How to Mute Accounts on Twitter,” Twitter Help Center, accessed August 2020, help.twitter.com/en/using-twitter/twitter-mute; “How to Use Advanced Muting Options,” Twitter Help Center, accessed August 2020, help.twitter.com/en/using-twitter/advanced-twitter-mute-options On Facebook, there’s no equivalent to muting, but users can “snooze” accounts or groups for 30 days, “mute” other users’ stories, and permanently “unfollow” posts without unfriending accounts.161“How do I Mute or Unmute a Story on Facebook?,” Facebook Help Center, accessed September 2020, facebook.com/help/408677896295618; “How do I Unfollow a Person, Page or Group on Facebook?,” Facebook Help Center, accessed August 2020, facebook.com/help/190078864497547 Facebook users can also “block” comments by keywords and “filter for profanity,” but only on pages,162Email from Facebook spokesperson, January 21, 2021; “Moderate Your Facebook Page,” Facebook for Media, accessed January 12, 2021, facebook.com/formedia/blog/moderating-your-facebook-page not on profiles, and the platform’s muting-like features only partly shield targets from abuse in DMs.163“How do I Turn Comment Ranking On or Off for My Facebook Page or Profile?,” Facebook Help Center, accessed February 9, 2021, facebook.com/help/1494019237530934; Email Response from Facebook spokesperson, January 21, 2021. Instagram enables users to “filter” comments by keywords or preset filters,164“How Do I Filter Out Comments I Don’t Want to Appear on My Posts on Instagram?,” Instagram Help Center, accessed September 2020, help.instagram.com/700284123459336?helpref=search&sr=1&query=filter%20by%20keywords&search_session_id=5573323a3faba2dd4fb569b36e18a66b “mute” posts or stories, and “mute” accounts entirely.165“How Do I Mute or Unmute Someone on Instagram?,” Instagram Help Center, accessed September 2020, help.instagram.com/469042960409432 In other words, muting, filtering, and snoozing overlap in functionality but remain distinct, which is profoundly confusing for users interacting with these features across services.

Blocking and muting, while critically useful features, can have serious drawbacks for vulnerable users, especially journalists and writers. These features can make it harder for targets to assess the risk they are truly facing because they can no longer see if abuse is ongoing, or if it has escalated to threats of physical or sexual violence, doxing, etc. When abusers can see that they have been blocked, they often create new accounts and ramp up abuse.166Elon Green, “Why Blocking Trolls Doesn’t Work,” Time, August 18, 2016, time.com/4457275/twitter-blocking-troll-failure/ As journalist Davey Alba explained in an interview with the Committee to Protect Journalists: “I was blocking a lot of people on Twitter for a while. That ended up being weaponized against me because people started making new accounts saying, ‘Oh, of course you blocked me, you don’t want to hear different points of view.’ So, I switched to muting accounts instead.”167Lucy Westcott, “NY Times Reporter Davey Alba on Covering COVID-19 Conspiracy Theories, Facing Online Harassment,” CPJ, May 21, 2020, cpj.org/2020/05/ny-times-reporter-davey-alba-on-covering-covid-19/ Blocking and muting can be impractical for journalists and writers, who need access to their audiences, sources, and subjects. Cutting those people off can mean missing important information.

Restricting, a feature that Instagram introduced in 2019, addresses several of the drawbacks of blocking and muting outlined above. By restricting an abusive account, the targeted user places all comments from that account behind a screen, which targets can then choose to review and decide whether to publish, delete, or leave “pending” indefinitely. What distinguishes restricting from blocking is that abusers are not alerted to the fact that their ability to communicate with a target has been restricted. Restricting is also different from muting in that the abuser is the only person who can see the restricted content—targeted users and all other users cannot.168Katy Steinmetz, “What to Know About Restrict, Instagram’s New Anti-Bullying Feature,” Time, July 8, 2019, time.com/5619870/instagram-bullying-restrict/ It is worth noting that even within one company, the term “restrict” has different meanings and parameters. On Facebook, users can place an abusive “friend” on a “restricted list” without unfriending that account entirely—which does not correspond to Instagram’s restriction feature.169“How Do I Add or Remove Someone From my Restricted List on Facebook?,” Facebook Help Center, accessed November 2020, facebook.com/help/206571136073851/?ref=u2u
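
To make the distinction concrete, the sketch below encodes the visibility rule this paragraph describes: the restricted commenter still sees their own comment, the target sees it only when choosing to review it, and other users see it only if the target approves it. This is an interpretation of the behavior described above, not Instagram’s actual implementation.

```python
# A sketch of the visibility rule behind "restricting," as described above:
# the restricted commenter sees their own comment; the target sees it only on
# request; other users see it only if the target approves it. This models the
# described behavior, not any platform's actual code.

from dataclasses import dataclass

@dataclass
class RestrictedComment:
    author: str
    target: str             # owner of the post being commented on
    approved: bool = False  # target chose to publish the pending comment

def visible_to(comment: RestrictedComment, viewer: str, reviewing: bool = False) -> bool:
    if comment.approved:
        return True          # published: visible to everyone
    if viewer == comment.author:
        return True          # the abuser is not alerted that anything changed
    if viewer == comment.target:
        return reviewing     # the target sees it only when choosing to review
    return False             # hidden from the wider public

c = RestrictedComment(author="@abuser", target="@journalist")
assert visible_to(c, "@abuser")
assert not visible_to(c, "@journalist")
assert visible_to(c, "@journalist", reviewing=True)
assert not visible_to(c, "@random_reader")
```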

Recommendations: Platforms need to improve and standardize blocking, muting, and restricting features that help users limit their exposure to abusive content and accounts. Specifically, platforms should:

  • Offer all three mechanisms—blocking, muting, and restricting—and apply each of these mechanisms consistently across all different forms of communication, including comments, DMs, tags, and mentions, and across desktop and mobile apps, regardless of device or browser type.
    • Like Twitter and Instagram, Facebook should offer the option to mute by specific kinds of content, such as keywords and emojis.
    • Twitter, Facebook, and Instagram should allow users to mute (rather than just block) DMs.
    • Like Instagram, Twitter and Facebook should allow users to block, mute, or restrict accounts by flagging comments in bulk.
    • Twitter and Facebook should offer functionalities akin to Instagram’s new restrict feature.
  • Convene a multi-stakeholder coalition of technology companies, civil society organizations, and vulnerable users—or enlist an existing coalition such as the Global Network Initiative,170“Global Network Initiative,” Global Network Initiative, Freedom of Expression and Privacy, July 26, 2020, globalnetworkinitiative.org/ Online Abuse Coalition,171“Coalition on Online Abuse,” International Women’s Media Foundation, accessed March 2021,  iwmf.org/coalition-on-online-abuse/ or Trust & Safety Professional Association172“Overview,” Trust and Safety Professional Association, accessed March 2021, tspa.info/—to standardize the terms and language used to describe these features based on their core functionalities and license them openly for reuse.

Mitigating risk: Muting is relatively low stakes, but blocking and restricting are riskier, potentially inhibiting transparency and public discourse. While the intended use of blocking or restricting is to limit exposure to an abusive account, these features can also be used to buffer a user, including a public official, from legitimate criticism or from ideas they do not like. A user who has been blocked by another user on Twitter, for example, can discover that they have been blocked because the two parties can no longer communicate. But a user who has been restricted by another user on Instagram has no way of knowing that their restricted content has been hidden—a highly effective way to curb retaliation and the escalation of abuse, but at the cost of transparency. To mitigate these outcomes, restriction should be strictly confined to limiting the visibility of comments posted by a restricted account in response to posts published by the restricting account. Restriction should not affect a restricted account’s ability to comment anywhere else on the platform. Instagram, which is the only platform to currently offer restriction, got this balance right; other platforms that introduce restriction should follow suit.173Kelly Wynne, “What is Instagram’s Restrict Feature and How to Use it,” Newsweek, October 2, 2019, newsweek.com/what-instagrams-restrict-feature-how-use-it-1462663 (“If a user chooses to restrict any individual, all of their future comments will be invisible to the public. This only pertains to comments on posts by the person who restricted them.”)

From a free expression standpoint, it is important to recognize that blocking does not stop someone from speaking freely on a social media platform but rather prevents one person from communicating directly with another. PEN America believes that most users of individual social media accounts are entitled to limit direct communication with other users, especially those engaging in abusive conduct. From the standpoint of transparency and accountability, however, the situation becomes more complicated for the professional accounts of public officials and government institutions. Some public officials have used the blocking feature to cut off critics, including their constituents.174“In August, ProPublica filed public-records requests with every governor and 22 federal agencies, asking for lists of everyone blocked on their official Facebook and Twitter accounts. The responses we’ve received so far show that governors and agencies across the country are blocking at least 1,298 accounts… For some, being blocked means losing one of few means to communicate with their elected representatives… Almost every federal agency that responded is blocking accounts.” Leora Smith, Derek Kravitz, “Governors and Federal Agencies Are Blocking Nearly 1,300 Accounts on Facebook and Twitter,” ProPublica, December 8, 2017, propublica.org/article/governors-and-federal-agencies-are-blocking-accounts-on-facebook-and-twitter In 2017, the Knight First Amendment Institute filed a lawsuit on behalf of Twitter users blocked by then-President Donald Trump. The court ruled that it was unconstitutional, on First Amendment grounds, for the president to block followers because Twitter is a “designated public forum” and @realDonaldTrump was “a presidential account as opposed to a personal account.”175Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 1:17-cv-5205 (S.D.N.Y. 2018)

It should be noted, however, that public officials who identify as women, LGBTQIA+, BIPOC, and/or members of religious or ethnic minorities are disproportionately targeted by online abuse and, in some cases, have been driven from public service.176Lucina Di Meco, “#ShePersisted Women, Politics & Power In The New Media World,” #ShePersisted, Fall 2019, static1.squarespace.com/static/5dba105f102367021c44b63f/t/5dc431aac6bd4e7913c45f7d/1573138953986/191106+SHEPERSISTED_Final.pdf; Abby Ohlheiser, “How Much More Abuse Do Female Politicians Face? A Lot,” MIT Technology Review, October 6, 2020, technologyreview.com/2020/10/06/1009406/twitter-facebook-online-harassment-politicians/; “Why Twitter Is a Toxic Place for Women,” Amnesty International, accessed January 20, 2021, amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1/ While all public officials are subject to the same obligations regarding transparency and accountability, many of the recommendations outlined in this report would significantly reduce the burden of online abuse for public officials and their staff, without sacrificing their obligations to their constituents.

Through the years, we have experienced ups and downs in our relationships with tech companies, and we often still struggle to find the right connections to deal effectively with digital security incidents. We are keen to work with platforms in designing and enabling sustainable and scalable mechanisms to protect some of the most at-risk individuals and organizations in the world.

Daniel Bedoya, director of the global 24-7 Digital Security Helpline for nonprofit Access Now

Reporting: Revamping the user experience

The challenge: The ability of targeted users—and their allies—to report abusive content and accounts is foundational for reducing online harms and a fundamental part of any system of accountability. The goal of reporting is to ensure that abusive content is taken down and that abusive accounts face consequences. But user reporting is just one part of the larger content moderation system, which is often ineffective in ensuring that content that meets the threshold of abuse and violates platform policies is taken down.177“Toxic Twitter-The Reporting Process,” Amnesty International, 2018, amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-4/

Content moderation is a complex, imperfect process, and the line between harassment and combative but legitimate critique is often murky. Civil society organizations hear regularly from writers, journalists, artists, and activists about the mistaken removal of creative or journalistic content that does not violate platform policies, while violative content flagged as abuse remains. In addition, the removal process is often opaque. There is an urgent need to reform the larger content moderation process to make it more effective, equitable, and accountable, and multiple excellent reports have provided robust recommendations.178Paul Barrett, “Who Moderates the Social Media Giants?,” NYU Stern Center for Business and Human Rights, June 2020, static1.squarespace.com/static/5b6df958f8370af3217d4178/t/5ed9854bf618c710cb55be98/1591313740497/NYU+Content+Moderation+Report_June+8+2020.pdf; Robyn Caplan, “Content or Context Moderation,” Data & Society, 2018, datasociety.net/wp-content/uploads/2018/11/DS_Content_or_Context_Moderation.pdf; “Freedom and Accountability: A Transatlantic Framework for Moderating Speech Online,” Annenberg Public Policy Center, 2020, annenbergpublicpolicycenter.org/feature/transatlantic-working-group-freedom-and-accountability/; Lindsay Blackwell et al., “Classification and Its Consequences for Online Harassment: Design Insights from Heartmob,” Association for Computing Machinery 1, no. CSCW (December 2017): 24, dl.acm.org/doi/10.1145/3134659; Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review, 131, no. 6 (2018): 1598-1670, harvardlawreview.org/wp-content/uploads/2018/04/1598-1670_Online.pdf; Robert Gorwa, Reuben Binns, and Christian Katzenbach, “Algorithmic content moderation: Technical and political challenges in the automation of platform governance,” Big Data & Society 7, no. 1 (January 2020): 1-15, doi.org/10.1177/2053951719897945; Joseph Seering et al., “Moderator engagement and community development in the age of algorithms,” New Media and Society, Vol 21, Issue 7 (2019), doi.org/10.1177/1461444818821316; Sarah Myers West, “Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms,” New Media & Society 21, no. 7 (July 2019): 1417–43, doi.org/10.1177/1461444818773059; Gennie Gebhart, “Who Has Your Back? Censorship Edition 2019,” Electronic Frontier Foundation, June 12, 2019, eff.org/sv/wp/who-has-your-back-2019; “Use of AI in Online Content Moderation,” Cambridge Consultants on behalf of OFCOM, 2019, ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf Rather than tackling the broader subject of content moderation in this report, PEN America focuses on examining reporting features from the standpoint of the user experiencing the abuse and doing the reporting.

In PEN America’s 2017 survey studying the impact of online harassment on writers and journalists, of the 53 percent of respondents who had alerted social media platforms to the harassment they were experiencing, 71 percent found the platform unhelpful.179“Online Harassment Survey: Key Findings​,” PEN America (blog), April 17, 2018, pen.org/online-harassment-survey-key-findings/ Three years later, many of the experts and journalists PEN America consulted for this report concurred. “I’d say it’s 50/50 when I report, in terms of whether or not Twitter responds with action,” says journalist Emily Burack.180Emily Burack, interview with PEN America, June 15, 2020. Journalist Jami Floyd has found trying to contact the company “a waste of time.”181Jami Floyd, interview with PEN America, June 6, 2020. According to a 2020 study of online hate and harassment conducted by the Anti-Defamation League and YouGov, 77 percent of Americans want companies to make it easier to report hateful content and behavior, up from 67 percent in 2019.182“Online Hate and Harassment: An American Experience,” Anti-Defamation League, June 2020, adl.org/media/14643/download

Reporting features are often confusing and labor-intensive, placing undue burden on targets of abuse. There is very little consistency across platforms—or even within platforms—about how reporting features work.183Caroline Sinders, Vandinika Shukla, and Elyse Voegeli, “Trust Through Trickery,” Commonplace, January 5, 2021, doi.org/10.21428/6ffd8432.af33f9c9 All platforms have policies governing acceptable conduct and content, but when users are in the process of reporting, they rarely have quick and easy access to relevant policies.184Facebook’s reporting feature is the most useful in that it includes an excerpt of relevant policies, but no link to the policies themselves; Twitter, within its reporting feature, includes a link to its overall policy page, but not to specific relevant pages, nor does it include relevant excerpts. Instagram includes neither links to policies, nor excerpts. Figuring out if a specific piece of content actually violates a particular policy can be confounding and time-consuming. “Explaining policies would help users interpret criticism versus harassment,” says Jason Reich, vice president of corporate security at The New York Times.185Jason Reich, interview with PEN America, June 9, 2020.

These challenges are exacerbated by reporting features that often direct users to select from preset categories of harmful content that rarely align, in language or concept, with specific policies or with the actual experiences of users. For instance, when users report content on Twitter, they must indicate how a specific account is harmful by selecting from preset options that include “targeted harassment,” which is separate from “posting private information,” “directing hate,” “threatening violence,” or “being disrespectful or offensive.” Nowhere in the reporting process—or in its “abusive behavior” policy—does Twitter explain what it means by “targeted harassment” (as opposed to just “harassment” or “abuse”).186“Abusive Behavior,” Twitter, accessed October 2020, help.twitter.com/en/rules-and-policies/abusive-behavior This is just one example of many that PEN America found across Twitter, Facebook, and Instagram.
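
To make the recommendation concrete, the sketch below (in Python) shows one way a reporting flow could tie each preset category to the policy it enforces, surfacing an excerpt and a link at the moment of reporting. Everything in it (category names, policy excerpts, URLs) is a hypothetical illustration, not any platform's actual language or data model.

```python
# Hypothetical mapping from the preset categories a user sees while reporting
# to the specific policy each category enforces. All names, excerpts, and URLs
# here are illustrative placeholders, not any platform's actual policy text.
REPORT_CATEGORIES = {
    "targeted_harassment": {
        "label": "Targeted harassment",
        "policy_id": "abusive-behavior",
        "policy_excerpt": "Targeted, repeated harassment of another person "
                          "is prohibited.",
        "policy_url": "https://platform.example/policies/abusive-behavior",
    },
    "private_information": {
        "label": "Posting private information",
        "policy_id": "doxing",
        "policy_excerpt": "Sharing another person's private information "
                          "without consent is prohibited.",
        "policy_url": "https://platform.example/policies/doxing",
    },
}


def reporting_prompt(category_key: str) -> str:
    """Text a reporting flow could display next to a selected category."""
    category = REPORT_CATEGORIES[category_key]
    return (f"{category['label']}: {category['policy_excerpt']} "
            f"Full policy: {category['policy_url']}")


print(reporting_prompt("targeted_harassment"))
```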

Some platforms funnel all reporting through a single channel. Others treat specific abusive tactics separately—distinguishing impersonation, doxing, and blackmail from harassment and requiring users to provide additional evidence to report such cases. Many users find themselves bombarded by hundreds or thousands of abusive messages in the midst of a coordinated campaign, yet few platforms allow them to report abusive content or accounts in bulk, forcing them to manually report individual content and accounts piecemeal.187Caroline Sinders, Vandinika Shukla, and Elyse Voegeli, “Trust Through Trickery,” Commonplace, January 5, 2021, doi.org/10.21428/6ffd8432.af33f9c9 The Facebook Civil Rights Audit, an independent investigation of the platform’s policies and practices through the lens of civil rights impact published in 2020, advocated for the introduction of bulk reporting of harassment, but Facebook has not acted on this recommendation.188Laura W. Murphy et al., “Facebook’s Civil Rights Audit—Final Report,” (July 8, 2020), about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf; Laura W. Murphy et al., “Facebook’s Civil Rights Audit – Final Report,” (June 30, 2019), about.fb.com/wp-content/uploads/2019/06/civilrightaudit_final.pdf

Research for this report revealed a range of user needs that are not currently being met. Some writers and journalists would like to see a quicker, easier, one-click reporting feature. Others emphasize the importance of being able to add context, including the migration of online threats to other communication channels, which is a red flag for heightened risk. Still others want to be able to indicate when they are being attacked in a coordinated way across platforms (especially platforms owned by the same company) or to clarify, for example, why a particular insult could actually be a coded hateful slur.189Laura E. Adkins, interview with PEN America, June 15, 2020; Jane R. Eisner, interview with PEN America, May 27, 2020; Ela Stapley, interview with PEN America, May 15, 2020; Interviews with PEN America, May—August 2020. Technologist Caroline Sinders argues that users should be able to pull up past reports and to draft, reopen, and review reports.190Caroline Sinders, interview with PEN America, June 4, 2020.
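
One way to picture such a report management system is as a simple data object that supports drafts, added context, and merging. The sketch below is purely illustrative; the field names, statuses, and methods are assumptions rather than any platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AbuseReport:
    """Illustrative report object supporting drafts, context, and merging."""
    report_id: str
    reporter_id: str
    reported_item_ids: List[str] = field(default_factory=list)
    category: str = ""                                       # e.g., "targeted_harassment"
    context_notes: List[str] = field(default_factory=list)   # user-supplied nuance
    status: str = "draft"                                     # draft -> submitted -> reviewed

    def add_context(self, note: str) -> None:
        """Let the reporter explain nuance, such as coded slurs or cross-platform attacks."""
        self.context_notes.append(note)

    def merge(self, other: "AbuseReport") -> None:
        """Fold another report about the same coordinated campaign into this one."""
        self.reported_item_ids.extend(other.reported_item_ids)
        self.context_notes.extend(other.context_notes)

    def submit(self) -> None:
        self.status = "submitted"
```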

Existing features and tools: Twitter, Facebook, and Instagram all offer reporting features. A Facebook or Instagram user whose content has been removed can appeal and then escalate their case to the Facebook Oversight Board, a nominally independent body launched in 2020 that reviews a small subset of content moderation decisions and has the power to reinstate content. But no platform gives users who report abusive content a formal mechanism to appeal or escalate decisions when the abusive content they reported is not removed.

Twitter comes closest, offering designated communication channels for select newsrooms and nonprofits.191Email response from Twitter spokesperson, October 30, 2020. Facebook has what a spokesperson calls “various, purposefully non-public channels available to specific partners.”192Email response from Facebook spokesperson, January 21, 2021. Essentially, a system has emerged in which only individuals or institutions, such as newsrooms and nonprofits, with personal connections at social media platforms are able to escalate individual cases. The ad-hoc, informal, and interpersonal nature of these escalation channels is inherently unpredictable, inequitable, and impossible to scale. “Through the years, we have experienced ups and downs in our relationships with tech companies, and we often still struggle to find the right connections to deal effectively with digital security incidents,” says Daniel Bedoya, director of the global 24-7 Digital Security Helpline for nonprofit Access Now. “We are keen to work with platforms in designing and enabling sustainable and scalable mechanisms to protect some of the most at-risk individuals and organizations in the world.”193Email from Daniel Bedoya (Access Now), January 28, 2021.

Recommendations: Platforms should revamp reporting features to be more user-friendly, responsive, and trauma-informed. Specifically, platforms should:

  • Ensure clarity and consistency between reporting features and policies within platforms. When reporting, users usually select from preset choices to indicate how the content violates the rules. The language used in these preset choices to describe prohibited abusive tactics must be harmonized with the language used in platform policies. When using the reporting features, users should be able to quickly and easily access relevant policies so they can check, in real time, whether the content they are reporting likely violates those policies.
  • Streamline the reporting process. Platforms should streamline the process by creating a single channel for reporting abusive content or accounts (rather than distinct and divergent channels for harassment, doxing, impersonation, etc.) and requiring as few mandatory steps as possible.
  • Create a flexible and responsive report management system. For users seeking more in-depth engagement with the reporting process, often because they are facing complex or coordinated abuse, platforms should:
    • Allow users to create a draft of a report, add to it later, and combine multiple reports.
    • Enable users to see all their past reports and review where they are in the reporting process.
    • Offer an option for users to include additional context—to explain cultural or regional nuances in language, for example, or to flag symbols or images being used to abuse or spread hate.
  • Add bulk reporting. In recognition of the coordinated nature of some harassment campaigns, platforms need to give users the ability to report multiple abusive accounts and content in bulk, to reduce the burden of piecemeal reporting.
  • Create a formal, publicly known appeals or escalation process. When users report abusive content that they perceive to be in violation of platform policies and that content is not taken down, they should have accessible avenues to appeal or escalate their case. The appeal or escalation process should be formal, public, integrated into the reporting process, and available to all users who report harassment. When users file an appeal or escalation, they should be able to amend their case, adding context or additional related abusive content.
  • Provide prompts offering additional support. When used in good faith, reporting is a signal that a user is being harmed on a platform. Users who report abuse should be nudged toward additional in-platform features to help mitigate abuse, as well as toward external resources, ideally filtered by identity, location, and needs (see “Anti-abuse help centers,” below).

Mitigating risk: It is important to recognize that bulk reporting can be weaponized. Reporting has been wielded by abusers to trigger account suspensions, shutdowns, and the removal of posts. Offering the ability to report multiple pieces of content and accounts without requiring users to designate specific, violating content for each one may exacerbate this problem.194“Online Harassment of Reporters: Attack of the Trolls,” Reporters Without Borders, accessed August 2020, rsf.org/sites/default/files/rsf_report_on_online_harassment.pdf Users whose content has been taken down or whose accounts have been suspended due to malicious reporting can already appeal to platforms to have their content or account reinstated, though the process is not always effective and needs significant improvement (see “Appeals,” below). Users can also turn to civil society organizations that have informal escalation channels with the platforms. However, the introduction of bulk reporting necessitates further mitigation. Platforms may need to limit how many accounts or pieces of content can be reported at a time, or to activate the bulk reporting feature only when automated systems identify an onslaught of harassment that bears hallmarks of coordinated inauthentic activity. Ultimately, risk assessment, consultation with civil society, and user testing during the design process would reveal the most effective mitigation strategies.
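
As one illustration of how such safeguards could work, the sketch below gates bulk reporting behind a volume or coordination signal and caps the size of any single bulk report. The thresholds, signals, and function names are assumptions for the sake of the example, not a description of any platform's actual logic.

```python
# Assumed thresholds; real values would come from risk assessment, consultation
# with civil society, and user testing, as recommended above.
BULK_REPORT_CAP = 50        # maximum items or accounts per bulk report
ONSLAUGHT_THRESHOLD = 200   # incoming mentions/replies in the past 24 hours (assumed window)


def bulk_reporting_enabled(mentions_in_window: int,
                           flagged_as_coordinated: bool) -> bool:
    """Unlock bulk reporting only when the user appears to be facing an onslaught."""
    return flagged_as_coordinated or mentions_in_window >= ONSLAUGHT_THRESHOLD


def cap_bulk_report(item_ids: list) -> list:
    """Trim an oversized bulk report so the feature is harder to weaponize."""
    return item_ids[:BULK_REPORT_CAP]
```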

When you’re experiencing stress, your body goes into an alarm state. All your key systems slow down so that your emergency system can work, and you either freeze, run away, or prepare to fight back. In an emergency mode you’re not thinking rationally.

Elana Newman, professor of psychology at the University of Tulsa and research director at the Dart Center for Journalism and Trauma

SOS button and emergency hotline: Providing personalized support in real time

The challenge: Some abusive tactics—such as threats of physical or sexual violence or massive coordinated campaigns—can put the targeted user into such an acute state of distress that they have difficulty navigating the attack in the moment. “When you’re experiencing stress, your body goes into an alarm state,” says Elana Newman, professor of psychology at the University of Tulsa and research director at the Dart Center for Journalism and Trauma. “All your key systems slow down so that your emergency system can work, and you either freeze, run away, or prepare to fight back. In an emergency mode, you’re not thinking rationally.”195Elana Newman, interview with PEN America, August 28, 2020.

Platforms face a design challenge: non-traumatized users reporting problematic content and actively distressed targets contending with severe abuse have quite different needs, and the needs of the latter are currently poorly supported. The majority of writers, journalists, and other experts we interviewed emphasized the platforms’ lack of responsiveness and stressed the urgent need for customized support in real time.196Ela Stapley, Pamela Grossman, Jasmine Bager, Talia Lavin, Jason Reich, Carrie Goldberg, Lu Ortiz, Lucy Westcott, Jaclyn Friedman, and many others. Interviews with PEN America, May 2020—February 2021. Users need a way to indicate that they are experiencing extreme online abuse and a mechanism for urgently accessing personalized assistance.

Existing features and tools: While Twitter, Facebook, and Instagram provide users with various features to respond to online abuse, many of them discussed in depth throughout this report, these features are rarely designed to reduce trauma and its triggers. All three platforms provide self-guided support experiences, eschewing costlier approaches that would better aid those experiencing ongoing or extreme abuse. None offer customer support staffed by human beings for users experiencing abuse. Facebook users have to dig through over a hundred forms, find the one that most closely corresponds to their problem, fill it out, and wait. Some of these forms, such as those for impersonation or images that violate privacy rights, correspond to specific abusive tactics, but none explicitly deal with online harassment.197Kristi Hines, “How to Contact Facebook and Get Support When You Need It [Ultimate Guide],” Post Planner, February 4, 2020, postplanner.com/how-to-contact-facebook-to-get-support/; Steven John, “How to Contact Facebook for Problems with Your Account and Other Issues,” Business Insider, June 14, 2019, businessinsider.com/how-to-contact-facebook-problems-with-account-other-issues Twitter also funnels users to forms, though the company does at least group the forms for reporting abuse in one place.198“Contact Us,” Help Center, Twitter Support, accessed January 20, 2021, help.twitter.com/en/contact-us Instagram theoretically has a customer support email, but its users are unlikely to receive a reply from a human being.199Steven John, “How to Contact Instagram for Help with Your Account, or to Report Other Accounts,” Business Insider France, August 1, 2019, businessinsider.fr/us/how-to-contact-instagram?op=1

Several platforms have experimented with panic buttons. In 2010, the Child Exploitation and Online Protection Center (CEOP) in the U.K. lobbied Facebook to offer a panic button to instantly provide guidance for children being bullied or threatened online. Facebook resisted but ultimately bowed to public pressure and created a convoluted workaround: a separate app that, once installed, added a tab with CEOP’s logo to a child’s account profile. If clicked, the tab reportedly directed children to Facebook’s “Safety for teens” page and a panic button to report abuse to law enforcement.200Martin Bryant, “Facebook Gets a ‘Panic Button’. Here’s How it Works,” The Next Web, July 12, 2010, thenextweb.com/uk/2010/07/12/facebook-gets-a-panic-button-heres-how-it-works/l; Caroline McCarthy, “Facebook to Promote New U.K. Safety App,” CNET, July 12, 2010, cnet.com/news/facebook-to-promote-new-u-k-safety-app/ It remains unclear how widely this feature was used or whether it was effective in supporting children; it no longer seems to be available on Facebook.

Panic buttons that are not directly integrated into a platform’s primary user experience are ineffective. As digital safety adviser Ela Stapley warns: “Most of these panic buttons don’t work because users don’t use the app.”201Ela Stapley, interview with PEN America, May 15, 2020. In January 2020, the dating app Tinder launched a panic button integrated directly in its platform, allowing users concerned about their safety to tap the button to alert a third-party company called Noonlight, which “will reach out to check on the user and alert emergency responders if needed.”202Dalvin Brown, “Tinder Is Adding a Panic Button for When Bad Dates Go Horribly Wrong,” USA Today, January 23, 2020, eu.usatoday.com/story/tech/2020/01/23/tinder-launch-panic-button-so-users-feel-safer-meeting-strangers/4551244002/; Rachel Siegel, “You Swiped Right But It Doesn’t Feel Right: Tinder Now Has a Panic Button,” The Washington Post, January 23, 2020, washingtonpost.com/technology/2020/01/23/tinder-panic-button/ (“Tinder users who add Noonlight to their profiles can enter information about a meet up, such as whom and where they are meeting. If a user taps the panic button, Noonlight will prompt them to enter a code. If the user doesn’t follow up, a text will come through from Noonlight. If there’s no response, Noonlight will put in a call. And if there’s still no answer, or other confirmation of an emergency, Noonlight will summon the authorities.”) Tinder has not yet published data on the use or efficacy of this promising feature.
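
The check-in flow described above can be thought of as an ordered escalation chain. The sketch below is a generic illustration modeled loosely on that description, not Noonlight's or Tinder's actual implementation; the step names, callbacks, and timing are all assumptions.

```python
import time

# Ordered escalation steps, modeled loosely on the check-in flow described above.
ESCALATION_STEPS = [
    "prompt_for_safety_code",       # ask the user to confirm they are safe
    "send_text_checkin",            # no code entered: send a text message
    "place_phone_call",             # no reply: call the user
    "alert_emergency_responders",   # still no confirmation: escalate
]


def run_escalation(perform_step, user_confirmed_safe, wait_seconds: int = 60) -> str:
    """Walk the chain until the user confirms they are safe or steps run out."""
    for step in ESCALATION_STEPS:
        perform_step(step)              # e.g., send the text or place the call
        time.sleep(wait_seconds)        # illustrative pause to wait for a response
        if user_confirmed_safe():
            return f"resolved_at:{step}"
    return "escalation_exhausted"
```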

Recommendations:

  • Platforms should create an SOS button that users could activate to instantly trigger additional in-platform protections and to access external resources. Users could proactively set up and customize these protections in advance (see the sketch after these recommendations), including:
    • Turning on or turning up toxicity filters (see “Building a shield and dashboard,” above).
    • Tightening security and privacy settings (see “Safety modes,” above).
    • Documenting abuse with one click or automatically (see “Documentation,” above).
    • Activating a rapid response team to provide support (see “Rapid response teams and delegated access,” above).
    • Accessing external resources such as emergency mental health counseling, legal counseling, and cybersecurity help (see “Anti-abuse help centers,” below).
  • Platforms should create an emergency hotline (phone and/or chat) that provides users facing extreme online abuse (such as cyberstalking or a coordinated mob attack) with personalized, trauma-informed support in real time.
  • To ensure user-friendliness and accessibility, platforms should fully integrate an SOS button and emergency hotline directly into the primary user experience. Better still, platforms could use automated detection to nudge users about these features, as Facebook does with its suicide prevention efforts.203“Suicide Prevention,” Facebook Safety Center, facebook.com/safety/wellbeing/suicideprevention
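
A rough sketch of how pre-configured protections might sit behind a single control follows; the preference fields and the `account` methods are hypothetical stand-ins for the in-platform features referenced above, not real APIs.

```python
from dataclasses import dataclass


@dataclass
class SOSPreferences:
    """Protections a user configures in advance of a crisis (illustrative)."""
    raise_toxicity_filter: bool = True
    tighten_privacy_settings: bool = True
    auto_document_abuse: bool = True
    alert_rapid_response_team: bool = True


def activate_sos(prefs: SOSPreferences, account) -> list:
    """Apply, in one step, every protection the user opted into in advance.

    `account` stands in for a platform object exposing the relevant settings;
    the method names below are assumptions for the sake of the example.
    """
    actions = []
    if prefs.raise_toxicity_filter:
        account.set_toxicity_filter("strict")
        actions.append("toxicity_filter_raised")
    if prefs.tighten_privacy_settings:
        account.enable_safety_mode()
        actions.append("privacy_tightened")
    if prefs.auto_document_abuse:
        account.start_documenting_abuse()
        actions.append("documentation_started")
    if prefs.alert_rapid_response_team:
        account.notify_delegates()
        actions.append("rapid_response_alerted")
    actions.append("external_resources_shown")   # always surface help resources
    return actions
```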

Journalists get frustrated when [an anti-abuse] tool they found useful suddenly stops working because it has run out of funding. They also become disheartened when the tool is buggy or the user experience is bad. They just want a tool that works well and is easy to set up.

Ela Stapley, digital safety adviser

Anti-abuse help centers: Making resources and tools accessible

The challenge: Many of the writers and journalists in PEN America’s network are unaware of the in-platform features and third-party tools that already exist to mitigate online abuse. In a 2017 survey of writers and journalists who had experienced online abuse, PEN America found that nearly half of respondents reported changing social media settings only after they were harassed.204“Online Harassment Survey: Key Findings,” PEN America, accessed September 2020, pen.org/online-harassment-survey-key-findings/ As reporter Katy Steinmetz wrote in an article in Time highlighting Instagram’s new anti-harassment features: “The company has launched other tools meant to help users protect themselves from bullies in the past—like a filter that will hide comments containing certain keywords that a user designates—but many remain unaware that such tools exist.”205Katy Steinmetz, “What to know about Restrict, Instagram’s New Anti-Bullying Feature,” Time, July 8, 2019, time.com/5619870/instagram-bullying-restrict/

Third-party tools that have emerged to counter online abuse (analyzed throughout this report) are urgently needed and very welcome, but they, too, face an array of challenges. Most are still in early stages of development, and few are sufficiently well financed to be as effective as they could be. As digital safety adviser Ela Stapley explains: “Journalists become disheartened when the tool is buggy or the user experience is bad. They also get frustrated when a tool they found useful suddenly stops working because it has run out of funding. They just want a tool that works well and is easy to set up.”206Ela Stapley, email to PEN America, February 2, 2021.

Some of these tools involve costs for the consumer, an insurmountable obstacle for many staff and freelance journalists in an industry under intense financial pressure. Any anti-abuse tool that must be linked to sensitive accounts, such as social media or email, should be audited by independent cybersecurity experts, but this cost can be prohibitive for the developers. Perhaps the biggest challenge, however, is that social media platforms fail to highlight, integrate, or support the development of these third-party tools.207Facebook’s data lockdown is a disaster for academic researchers, The Conversation, April 2018, theconversation.com/facebooks-data-lockdown-is-a-disaster-for-academic-researchers-94533

Existing features and tools: Twitter, Facebook, and Instagram all have help centers, with some content specifically focused on online abuse. Both Twitter’s multipage “Abuse”208“Safety and Security,” Twitter, accessed January 13, 2021, help.twitter.com/en/safety-and-security#abusesection and Instagram’s “Learn how to address abuse”209“Learn How to Address Abuse,” Instagram Help Center, accessed January 13, 2021, help.instagram.com/527320407282978/?helpref=hc_fnav&bc[0]=Instagram%20Help&bc[1]=Privacy%20and%20Safety%20Center pages are relatively easy to find, but the guidance they offer is bare-bones and general. On Twitter’s page headed “How to help someone with online abuse,”210“How to Help Someone with Online Abuse,” Twitter Help Center, accessed January 22, 2021, help.twitter.com/en/safety-and-security/helping-with-online-abuse#:~:text=Report%20content%20to%20us,violations%20of%20the%20Twitter%20Rules for example, the platform does not explain that users can use TweetDeck to give allies delegated access to their accounts to help manage abuse. Instagram does not explain the strategic differences between blocking, muting, filtering, and restricting. Facebook’s guidance and resources for navigating abuse are more robust yet much harder to find and more confusing to parse. Its “Abuse resources”211“Abuse Resources,” Facebook Help Center, accessed January 13, 2021, facebook.com/help/726709730764837/abuse-resources/?helpref=hc_fnav page, within the help center, is just a brief Q&A that could significantly benefit from more direct, specific, and nuanced guidance. Its section on non-consensual intimate imagery212“Not Without My Consent,” Facebook Safety Center, accessed January 13, 2021, facebook.com/safety/notwithoutmyconsent is more robust but hard to find, and it does not address the myriad other abusive tactics that adult users face. Most of these anti-harassment resources exist outside the primary user experience, and users are not alerted to their existence with proactive nudges. Many of them seem to be primarily aimed at children, parents, and educators rather than adults.

While platform help centers do not specifically address the needs of journalists, a number of the third-party tools analyzed throughout this report do. Some have been built by research teams within big technology companies.214“A safer internet means a safer world,” Jigsaw, accessed December 8, 2020, jigsaw.google.com A handful were developed by the private sector and seem to have viable revenue models.215“About Us,” DeleteMe, accessed December 8, 2020, joindeleteme.com/about-us “Delete many tweets with one click!,” TweetDeleter, accessed December 8, 2020, tweetdeleter.com But the majority of these tools and services have been built with limited resources by universities, nonprofits, and/or technologists launching startups.216About HeartMob,” HeartMob, accessed December 8, 2020, iheartmob.org/about; “JSafe: Free mobile application for women journalists to report threats,” The Coalition For Women in Journalism, accessed December 8, 2020, womeninjournalism.org/jsafe; Katilin Mahar, Amy X. Zhang, David Karger, “Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation,” CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, no. 586 (April 21, 2018): 1-13, doi/10.1145/3173574.3174160; Haystack Group, accessed October 2020, haystack.csail.mit.edu/; “Founding story,” Block Party, accessed December 8, 2020, blockpartyapp.com/about-us/; “Product,” Tall Poppy, accessed March 1, 2021, tallpoppy.com/product/

Recommendations:

  • Platforms should build robust, user-friendly, and easily accessible sections within help centers that expressly address online abuse. Specifically, Twitter, Facebook, and Instagram should:
    • Outline and adequately explain all the features they already offer to address online abuse.
    • Develop content specifically tailored to the needs of writers and journalists, taking into account that they rely on social media to do their work.
    • Provide a visually prominent link directly to this information in the main user experience. Like fire alarms, these links can be unobtrusive but should be right at hand in case they are needed.
    • Use nudges, sign-on prompts, interactive tip sheets, graphics, videos, and quizzes to regularly educate users about in-platform anti-abuse features.
    • Make the information available in multiple languages, as well as in large print and audio, to ensure wider accessibility.
    • Invest in training vulnerable users with specific needs, including journalists and writers, on how to use anti-harassment features.
    • Direct users to external resources, including online abuse self-defense guides,217“Online Harassment Field Manual,” PEN America, accessed September 30, 2020, onlineharassmentfieldmanual.pen.org/; “OnlineSOS Action Center,” OnlineSOS, accessed September 30, 2020. onlinesos.org/action-center. cybersecurity help lines and tools,218“Home,” Tall Poppy, accessed September 2020, tallpoppy.com/; “Digital Security Helpline.” Access Now (blog), accessed September 2020, accessnow.org/help/ mental health services,219“Lifeline,” National Suicide Prevention Lifeline, accessed January 26, 2021, suicidepreventionlifeline.org/; “LGBT National Youth Talkline,” LGBT National Help Center, accessed January 26, 2021, glbthotline.org/talkline.html; “About the National Sexual Assault Telephone Hotline,” RAINN, accessed January 26, 2021, rainn.org/about-national-sexual-assault-telephone-hotline; “Helpline” Vita Activa, accessed January 22, 2021, vita-activa.org/tag/helpline/ and peer support.220“Self Care for People Experiencing Harassment,” HeartMob, accessed October 2020, iheartmob.org/resources/self_care; “Resources for Journalists,” TrollBusters, accessed October 2020, yoursosteam.wordpress.com/resources-for-journalists/ Because direct referral from a global platform will exponentially increase the volume of requests for help, platforms should consult with and support the organizations responsible for staffing and maintaining those resources, as Reddit has done in its partnership with the Crisis Text Line for users at risk of suicide and self-harm.221Sarah Perez, “Reddit Partners and Integrates with Mental Health Service Crisis Text Line,” TechCrunch, March 5, 2020, techcrunch.com/2020/03/05/reddit-partners-and-integrates-with-mental-health-service-crisis-text-line/
  • Platforms should support the creation of promising new third-party tools designed to counter online abuse—especially those built by and for women, BIPOC, and/or LGBTQIA+ technologists with firsthand experience of online abuse—by investing in R&D and providing access to application programming interfaces (APIs), data, and other relevant information.

I want harassment to be as annoying for my harassers as it is for me to report it.

Talia Lavin, journalist

Disarming Abusive Users

The burden of dealing with online abuse often rests squarely on the shoulders of its targets, and those targets are disproportionately women, BIPOC, LGBTQIA+, and/or members of religious and ethnic minorities. Its impact, from the strain on mental health to the chilling effects on speech and career prospects, stands in stark contrast to the point-and-click ease with which abusers inflict it. “I want harassment to be as annoying for my harassers as it is for me to report it,”222Talia Lavin, interview with PEN America, May 25, 2020. says journalist Talia Lavin.

Online abuse cannot be addressed solely by creating tools and features that empower the targets of abuse and their allies. Platforms must also actively hold abusive users to account. “By allowing harassers to function with immunity,” platforms “exacerbate harm,” Soraya Chemaly, executive director of the Representation Project, explained in a 2015 interview with Mic.223Julie Zeilinger, “One of the Biggest Reasons Harassment Persists on Social Media Is One We Never Talk About,” Mic, March 26, 2015, mic.com/articles/113674/one-of-the-biggest-reasons-harassment-persists-on-social-media-is-one-we-never-talk-about

As noted in the introduction and throughout this report, efforts to deter abuse must be balanced against competing priorities: to protect critical speech and prevent the silencing of legitimate dissenting viewpoints, which may include heated debate that does not rise to the level of abuse, as well as humor, satire, and artistic expression that can be mistaken for abusive content. To that end, PEN America’s recommendations seek to deter abuse without unduly increasing the platforms’ power to police critical speech, a power that threatens all users’ free expression rights.

A foundational principle of our analysis is that, although the lines can sometimes seem clear, any person can be a target, a bystander, or an abuser, depending on their behavior.224Justin Cheng et al., “Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions,” (2017), cs.stanford.edu/~jure/pubs/trolling-cscw17.pdf Furthermore, it can be useful to distinguish between casual abuse and committed or automated abuse. Casual abuse may include, for example, individuals who are engaged in unfocused nastiness for sport or who get swept up in online anger or vitriol against a target. Committed or automated abuse includes individuals, bots, groups, and state actors engaging in coordinated and premeditated abuse against particular targets, often with particular outcomes in mind. Both are problematic and harmful, but casual abuse warrants a different set of interventions than the strategies being used to battle coordinated inauthentic activity, such as mass content takedowns, rapid account deletions, and forensic investigations.225Craig Timberg, Elizabeth Dwoskin, “Twitter is sweeping out fake accounts like never before, putting user growth at risk,” The Washington Post, July 6, 2018, wapo.st/3ibqvQD; Mae Anderson, “Twitter and Facebook delete foreign state-backed accounts,” PBS, December 20, 2019, pbs.org/newshour/world/twitter-and-facebook-delete-foreign-state-backed-accounts In this section, our recommendations primarily focus on deterring casual abuse.

Nudges: Using design to discourage abuse

The challenge: Social media platforms are built to encourage immediacy, emotional impact, and virality because those characteristics heighten the user engagement that is fundamental to their business models.226“If exposing users to others’ emotions keeps them engaged, and if engagement is a key outcome for digital media, [as a business strategy] digital media companies should try to upregulate users’ emotions by increasing the frequency and intensity of expressed emotions…This is likely to magnify emotion contagion online.” Amit Goldenberg, James J. Gross, “Digital Emotion Contagion,” Trends in Cognitive Sciences 24, no. 4 (April 2020): 316–328, hbs.edu/faculty/Publication%20Files/digital_emotion_contagion_8f38bccf-c655-4f3b-a66d-0ac8c09adb2d.pdf The result of such incentives, according to Dr. Kent Bausman, a sociology professor at Maryville University, is that social media “has made trolling behavior more pervasive and virulent.”227Peter Suciu, “Trolls Continue to be a Problem on Social Media,” Forbes, June 4, 2020, forbes.com/sites/petersuciu/2020/06/04/trolls-continue-to-be-a-problem-on-social-media/#67f2bf1e3a89 Researchers at the University of Michigan recently highlighted their “concerns about the limitations of existing approaches social media sites use” to curb abuse—namely, restrictive tactics like “removing content and banning users.” They advocate for educating users rather than just penalizing them.228Laurel Thomas, “Publicly shaming harassers may be popular, but it doesn’t bring justice,” Michigan News, April 2020, record.umich.edu/articles/publicly-shaming-harassers-may-be-popular-but-doesnt-bring-justice/

One way to do this is through nudges—interventions that encourage, rather than force, changes in behavior by presenting opportunities for feedback and education.229“Molly Crockett at Yale’s Crockett Lab has suggested that our inability to physically see the emotional reactions of others might encourage negative behavior on social media.” Tobias Rose-Stockwell, “Facebook’s problems can be solved with design,” Quartz, 2018, qz.com/1264547/facebooks-problems-can-be-solved-with-design/; Kathleen Van Royen et al., “‘Thinking Before Posting?’ Reducing cyber harassment on social networking sites through a reflective message,” Computers in Human Behavior 66 (January 2017): 345-352, doi.org/10.1016/j.chb.2016.09.040; Yang Wang et al., “Privacy Nudges for Social Media: An Exploratory Facebook Study,” WWW ‘13 Companion: Proceedings of the 22nd International Conference on World Wide Web (May 2013): 763-770, doi.org/10.1145/2487788.2488038 For example, a user in the process of drafting a post with abusive language could receive a nudge encouraging them to pause and reconsider. Along these lines, Karen Kornbluth, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund, has called for platforms to counter virality by introducing friction—design elements that nudge users by making certain behaviors less convenient or slower—to “make it harder to spread hate and easier to engage constructively online.”230Karen Kornbluth, Ellen P. Goodman, “Safeguarding Digital Democracy: Digital Innovation and Democracy Initiative Roadmap,” March 2020, gmfus.org/sites/default/files/Safeguarding%20Democracy%20against%20Disinformation_v7.pdf; Email response from Karen Kornbluth, August 25, 2020. (According to an email to PEN America, Karen Kornbluth has shifted from the term “light patterns” to the term “empowerment patterns” since the aforementioned article was published.) The use of nudges has the major advantage of preserving freedom of expression, giving users an opportunity to make informed and deliberate decisions about how they choose to act.

Screenshots of Twitter’s and Instagram’s experiments with nudges, which pop up to prompt users to reconsider the language of a potentially abusive post. (Screenshots from Twitter and Instagram)

That said, nudges are no panacea. The jury is still out on how effective specific types of nudges actually are,231Cass R. Sunstein, “Nudges That Fail” (July 18, 2016), ssrn.com/abstract=2809658; Samuel Hardman Taylor et al., “Accountability and Empathy by Design: Encouraging Bystander Intervention to Cyberbullying on Social Media,” Proceedings of the ACM on Human-Computer Interaction 3 (November 2019), doi.org/10.1145/3359220; Taylor Hatmaker, “Twitter plans to bring prompts to ‘read before you retweet’ to all users,” TechCrunch, September 24, 2020, techcrunch.com/2020/09/24/twitter-read-before-retweet/; Susan Benkelman, Harrison Mantas, “Can an accuracy ‘nudge’ help prevent people from sharing misinformation?,” Poynter Institute, July 16, 2020, poynter.org/fact-checking/2020/can-an-accuracy-nudge-help-prevent-people-from-sharing-misinformation/; Alessandro Acquisti et al., “Nudges for Privacy and Security: Understanding and Assisting Users’ Choices Online,” ACM Computing Surveys 50, no. 3, (October 2017), doi.org/10.1145/3054926 particularly in the absence of meaningful experimental validation. Furthermore, nudges that depend on automation can be prone to false positives because algorithms are vulnerable to the biases of their authors and the data sets on which they are trained.232“The challenge of identifying subtle forms of toxicity online,” Medium, December 12, 2018, medium.com/the-false-positive/the-challenge-of-identifying-subtle-forms-of-toxicity-online-465505b6c4c9 In the example offered above, for instance, a nudge could mistakenly flag a post as potentially abusive because it depends on an automated system that has trouble distinguishing between a racial slur and a reclaimed term. 

Existing features and tools: In recent years, platforms have piloted the use of nudges to discourage harmful content, especially disinformation. More should be done to apply this approach to reducing harassment and to evaluate its efficacy. Twitter233Nick Statt, “Twitter tests a warning message that tells users to rethink offensive replies,” The Verge, May 5, 2020, theverge.com/2020/5/5/21248201/twitter-reply-warning-harmful-language-revise-tweet-moderation and Instagram234From Instagram: “When someone writes a caption for an Instagram feed post or a comment and our AI detects the caption/comment as potentially offensive, they will receive a prompt informing them that their caption is similar to those reported for bullying. They will have the opportunity to edit their caption/comment before it is posted.” Email response from Instagram spokesperson, January 15, 2021; Eric Ravenscraft, “Instagram’s New Anti-Bullying Nudges Could Actually Work,” OneZero, May 9, 2019, onezero.medium.com/instagrams-new-anti-bullying-nudges-could-actually-work-9811ef41b8cb; are currently experimenting with automation that proactively identifies harmful language and nudges users to rethink a reply before sending it. In an email to PEN America, Facebook states that it is also piloting nudges for harassing content, though we were unable to verify this statement.235From Facebook: “In certain scenarios, Facebook will prompt users to re-review their content prior to posting because it looks similar to previous violating posts. This is generally limited to hate speech and harassment and allows the user to edit the post, post as-is, or remove it altogether;” Email response from Facebook spokesperson, January 21, 2021. NOTE: PEN America was unable to independently verify this claim. While Facebook and Instagram claim that such nudges can help reduce abuse,236Email response from Instagram spokesperson, January 15, 2021; Email response from Facebook spokesperson, January 21, 2021. none of the platforms examined in this report have shared data on the efficacy of these interventions.237“OneZero asked Instagram if its existing filters have had any measurable impact on bullying, but the company declined to share specific numbers.” Eric Ravenscraft, “Instagram’s New Anti-Bullying Nudges Could Actually Work,” OneZero, May 9, 2019, onezero.medium.com/instagrams-new-anti-bullying-nudges-could-actually-work-9811ef41b8cb
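
To make the mechanism concrete, here is a minimal sketch of a pre-posting nudge, assuming a hypothetical toxicity classifier that returns a score between 0 and 1; the production systems at Twitter, Instagram, and Facebook are proprietary and may work quite differently.

```python
NUDGE_THRESHOLD = 0.8   # assumed score above which a draft triggers a prompt


def compose_flow(draft: str, toxicity_score, ask_user_to_reconsider) -> str:
    """Nudge the author of a potentially abusive draft without blocking it.

    `toxicity_score` stands in for a trained classifier returning 0-1;
    `ask_user_to_reconsider` shows the prompt and returns an edited draft,
    or None if the user decides to keep the original wording.
    """
    if toxicity_score(draft) >= NUDGE_THRESHOLD:
        revised = ask_user_to_reconsider(draft)
        if revised is not None:
            return revised
    return draft   # the nudge is advisory: the user can always post as written
```

Because the flow only prompts and never blocks, the final decision about what to post stays with the user, which is what distinguishes a nudge from a takedown.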

Recommendations:

  • Platforms should use nudges to discourage users’ attempts to engage in abusive behavior. One way to do this is to use automation to proactively identify content as potentially abusive and nudge users with a warning that their content may violate platform policies and encourage them to revise it before they post.
  • Platforms should study the efficacy of nudges to curb abuse and publish these findings. Platforms should also communicate clearly and transparently about how the algorithms that inform many of these interventions are trained, including efforts to curb implicit bias.
  • Platforms should give outside researchers access to data—both on the efficacy of nudges and on the data on which algorithms are trained to detect harmful language—so they can independently assess success and flag unintended harm, as well as recommend improvements.

Rules in real time: Educating users and making consequences visible

The challenge: Norms governing behavior can only work if they have been clearly communicated, understood, and agreed upon by the members of an online community. However, most social media platforms keep their policies governing user behavior in an area distinct from the primary user experience of their product. In an interview with PEN America, Jillian York, director for international free expression at the EFF, says that Twitter’s rules, for example, “are really difficult to find, so most users aren’t even aware of what they are or how to find them before they violate them.” Moreover, a user might break a rule, “and all they’re told is that they broke the rules, not which rule they broke and why or how.”238Jillian York, interview with PEN America, May 21, 2020.

While it is important for platforms to maintain dedicated areas that display all their policies, it is equally critical to fully integrate the most important and relevant guidelines directly within the primary user experience so users can see this information in real time. When users create a new password, for instance, they should not have to go to a separate page to learn about minimum password complexity requirements; that information is included in the same form or window. Similarly, users should be able to quickly check that content complies with key rules before posting, without having to click away and search through a separate website. Integrating platform rules could not only reduce casual abuse and increase transparency but also counter the perception of arbitrary or biased enforcement.

Recent research from Stanford University indicates that making community rules more visible, including at the top of comments and discussions sections, increases newcomers’ compliance with them while simultaneously increasing participation.239The Stanford team conducted an experiment randomizing announcements of community rules in large-scale online conversations for a scientific website with 13 million subscribers. Compared with discussions with no mention of community expectations, displaying the rules increased newcomer rule compliance by more than eight percent and increased the participation rate of newcomers in discussions by 70% on average. J. Nathan Matias, “Preventing harassment and increasing group participation through social norms in 2,190 online science discussions,” PNAS 116, no. 2 (2019): 9785-9789, doi.org/10.1073/pnas.1813486116 An internal audit at YouTube found that users actively wanted the platform to create clearer policies to make enforcement more consistent, and to be more transparent about enforcement actions (see “Escalating penalties,” below).240“Making our strikes system clear and consistent,” YouTube, accessed December 2, 2020, blog.youtube/news-and-events/making-our-strikes-system-clear-and

Existing features: If Twitter users want to review the guidelines governing acceptable behavior, they have to go to the overall menu on the platform’s app or desktop version and intuit that they will find these guidelines within the “help center,” which then takes them to a webpage outside the primary user experience. From there, users have to head to a page called “Rules and policies” and review over a dozen distinct pages that are, for reasons that remain unclear, spread out between two sections titled “Twitter’s Rules and policies” and “General guidelines and policies.”241“Help Center,” Twitter, accessed January 2021, help.twitter.com/en/rules-and-policies On Facebook’s app and desktop version, users looking for guidelines have to go to the main menu and embark on a long and winding journey with stops at “help and support,” “help center,” “policies and reporting,” and “about our policies”—none of which actually lead to the platform’s “community standards,” which live on a separate website.242“About our Policies,” Facebook, accessed January 2021, facebook.com/help/1735443093393986/?helpref=hc_global_nav; “Community Standards,” Facebook, accessed October 2020, facebook.com/communitystandards/ On Instagram, from within either the app or desktop version, the authors of this report were unable to locate the platform’s “community guidelines” (which live on a separate website243“Community Guidelines,” Instagram, accessed January 2021, facebook.com/help/instagram/477434105621119/?helpref=hc_fnav&bc[0]=Instagram%20Help&bc[1]=Privacy%20and%20Safety%20Center). Furthermore, as the Facebook Oversight Board asserted, given that Facebook and Instagram belong to the same company, the relationship between Facebook’s extensive “community standards” and Instagram’s shorter “community guidelines” needs to be clarified and their inconsistencies need to be ironed out.244evelyn douek, “The Facebook Oversight Board’s First Decisions: Ambitious, and Perhaps Impractical,” Lawfare, January 28, 2021, lawfareblog.com/facebook-oversight-boards-first-decisions-ambitious-and-perhaps-impractical

Recommendations:

  • Platforms should make their rules—and the consequences for breaking them—easily and directly accessible to users in real time and within the primary user experience.
  • Platforms should use the full suite of design elements—including nudges, labels, and contextual clues—to spotlight relevant rules.
  • Platforms should routinely use policy checkups or reminders, akin to existing interactive privacy checkups on Facebook and Google. Whenever platforms make major changes or updates to their rules governing acceptable behavior, they should proactively call attention to these changes and seek affirmative consent from users.

Escalating penalties: Building an accountability system for abusive users

The challenge: To effectively tackle online abuse, especially by committed abusers, decisive measures like account suspensions and bans are sometimes necessary. They are also fraught. As discussed throughout this report, overzealous suspensions or bans can compromise free expression. Content moderation is inherently imperfect, especially when it relies on automation, and it has repeatedly been weaponized to silence writers, journalists, and activists.245Sam Biddle, “Facebook Lets Vietnam’s Cyberarmy Target Dissidents, Rejecting a Celebrity’s Plea,” The Intercept, December 12, 2020, theintercept.com/2020/12/21/facebook-vietnam-censorship/  For major and minor infractions alike, however, the consequences for violating rules are not readily visible or clearly communicated to users.

“Platforms should add suggestions of consequences of misconduct in the standards,” says law professor Mary Anne Franks. “They need to have clear rules that say this kind of behavior will result, for instance, in a temporary suspension of an account. They need to say what the enforcement will be. Otherwise, compliance with their standards is a recommendation, not an obligation.”246Mary Anne Franks, interview with PEN America, May 22, 2020. Experiments in the gaming industry underscore the efficacy of explaining to abusers exactly which policies their content has violated and why it led to a penalty.247Laura Hudson, “Curbing Online Abuse Isn’t Impossible. Here’s Where We Start,” Wired, May 15, 2014 wired.com/2014/05/fighting-online-harassment/

This is exactly what the Facebook Oversight Board advised in its first set of recommendations, which called for “more transparency and due process for users, to help them understand the platform’s rules,” according to evelyn douek, a lecturer and doctoral candidate at Harvard Law School. “The FOB shows concern that users who have been found to violate their rules simply cannot know what they are doing wrong, whether because Facebook’s policies are not clear or lack detail or are scattered around different websites, or because users are not given an adequate explanation for which rule has been applied in their specific case.”248evelyn douek, “The Facebook Oversight Board’s First Decisions: Ambitious, and Perhaps Impractical,” Lawfare, January 28, 2021, lawfareblog.com/facebook-oversight-boards-first-decisions-ambitious-and-perhaps-impractical; see also “Online harassment and abuse against women journalists and major social media platforms,” ARTICLE 19 (2020): 16-17, article19.org/wp-content/uploads/2020/10/Gender-Paper-Brief-2.pdf

Existing features and tools: Some platforms are developing escalating penalties, but these generally remain nascent and poorly publicized to users. They do not actually constitute a transparent and clearly communicated accountability system. Twitter may suspend accounts for violating its rules, including for engaging in abusive behavior, and may require users to verify account ownership or formally appeal to lift a suspension. The company can also place an account in read-only mode, limiting the ability to tweet, retweet, or like content, and may require a user to delete the violating content to restore full functionality. Subsequent violations, it warns, “may result in permanent suspension,” which is a perplexing oxymoron.249“About Suspended Accounts,” Twitter, accessed October 2020, help.twitter.com/en/managing-your-account/suspended-twitter-accounts; “Our range of enforcement options,” Twitter, accessed November 2020, help.twitter.com/en/rules-and-policies/enforcement-options; “Hateful Conduct Policy,” Twitter Help Center, accessed January 26, 2021, help.twitter.com/en/rules-and-policies/hateful-conduct-policy

In 2019, Instagram began issuing alerts to users whose posts repeatedly violate community guidelines, informing them that their account may be banned if they persist and providing them with a history of the relevant posts and the reasons for their removal.250Jacob Kastrenakes, “Instagram will now warn users close to having their account banned,” The Verge, theverge.com/2019/7/18/20699393/instagram-account-ban-warning-message-moderation-update According to Facebook, the platform issues warnings to users who post content or misuse its features in a way that violates community standards.251“Warnings,” Facebook, accessed October 2020, facebook.com/help/101389386674555 In an email to PEN America, Facebook elaborated: “Time-bound feature limits are the central penalty used on Facebook users. If Facebook removes multiple pieces of content from a user’s profile, Page, or group within a short period of time, they’ll have a short-term restriction placed on their account. This user will continue to receive additional and longer restrictions as long as they keep on violating our Community Standards. Other less frequent penalties include rate limits, education requirements, audience limitation, loss of certain product features (such as the ability to go Live).”252Email response from Facebook spokesperson, January 21, 2021.

Of the platforms analyzed in this report, only Twitter publicly outlines its penalties in its help center. But to better understand Twitter’s accountability system, a user would have to read several distinct policy pages, all of which make heavy use of the word “may.” PEN America could find only the most minimal information on Facebook’s or Instagram’s help centers that lays out each platform’s penalties for violating their policies; we cobbled together the information above from a news article,253Jacob Kastrenakes, “Instagram will now warn users close to having their account banned,” The Verge, theverge.com/2019/7/18/20699393/instagram-account-ban-warning-message-moderation-update an email exchange with both platforms,254Email response from Instagram spokesperson, January 15, 2021; Email response from Facebook spokesperson, January 21, 2021. and a corporate blog post from 2018.255“Enforcing Our Community Standards,” Facebook, accessed February 2021, about.fb.com/news/2018/08/enforcing-our-community-standards/ Facebook and Instagram informed PEN America that they do not “provide full visibility into the penalties to avoid gaming.”256Email response from Instagram spokesperson, January 15, 2021; Email response from Facebook spokesperson, January 21, 2021. Bottom line: It is exceptionally difficult for users to understand the consequences of violating platform policies, even if they can ascertain what those policies are in the first place.

Beyond the platforms analyzed in this report, YouTube offers a model worth exploring. The platform overhauled its penalties into a system of escalating “strikes” after consulting with its users in 2019.257The YouTube Team, “Making Our Strikes System Clear and Consistent,” Youtube Official Blog, February 19, 2019, blog.youtube/news-and-events/making-our-strikes-system-clear-and When a user violates community guidelines for the first time, they receive a warning that explains what content was removed, which policies were violated, and what happens next. If a user posts prohibited content a second time, they receive a “strike,” which restricts account functionality for one week. A second strike brings further functionality restrictions for two weeks, and a channel that receives three strikes within a 90-day period will, according to YouTube, be deleted. Appeals, the platform states, are available at every step of the process.258“Community Guidelines strike basics,” YouTube, accessed October 2020, support.google.com/youtube/answer/2802032?hl=en The platform claims that “94% of those who do receive a first strike never get a second one,” though it has not released the raw data to back up this assertion.259“Making our strikes systems clear and consistent,” YouTube, accessed December 2, 2020, blog.youtube/news-and-events/making-our-strikes-system-clear-and While YouTube’s administration of this system remains obscure in places—for example, in its use of automated flagging—it provides a useful template to build on.
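
To make the escalation logic concrete, the following is a minimal sketch, in Python, of how a strikes system of the kind YouTube describes could be tracked. It is not YouTube’s implementation: the data structure and function names are invented for illustration, and only the warning-first sequence, the one- and two-week restrictions, and the 90-day window are taken from the public description above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Illustrative sketch of a YouTube-style escalating "strikes" system, based only on
# the public description above. Names, structures, and helper logic are assumptions.

STRIKE_WINDOW = timedelta(days=90)  # strikes are counted within a rolling 90-day window
RESTRICTION_BY_STRIKE = {1: timedelta(weeks=1), 2: timedelta(weeks=2)}

@dataclass
class Account:
    account_id: str
    warned: bool = False                                    # first violation = warning only
    strikes: List[datetime] = field(default_factory=list)   # timestamps of active strikes
    terminated: bool = False

def record_violation(account: Account, now: datetime) -> str:
    """Apply the next escalating penalty and return a human-readable outcome."""
    if account.terminated:
        return "channel already terminated"

    # First violation: a warning explaining the removal, the policy, and next steps.
    if not account.warned:
        account.warned = True
        return "warning issued"

    # Drop strikes older than 90 days, then record the new one.
    account.strikes = [s for s in account.strikes if now - s < STRIKE_WINDOW]
    account.strikes.append(now)
    count = len(account.strikes)

    if count >= 3:
        account.terminated = True
        return "three strikes within 90 days: channel terminated"

    restriction = RESTRICTION_BY_STRIKE[count]
    return f"strike {count}: functionality restricted for {restriction.days} days"
```

In this sketch, strikes more than 90 days old no longer count toward termination, reflecting the 90-day window described above, and an appeals mechanism would sit alongside each step.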

Recommendations:

  • Platforms should create a transparent system of escalating penalties for all users, including warnings, strikes, temporary functionality restrictions, and suspensions, as well as content takedowns and account bans. This accountability system should be fully integrated into the primary user experience and clearly visible alongside policies governing user behavior. (A simplified sketch of one possible escalating-penalty ladder appears after this list.)
  • Platforms should use the full suite of design elements (nudges, labels, contextual clues, etc.) to communicate clearly and consistently with users across all available channels (within platform, via email, etc.) about what rule has been violated, current and potential future penalties, and next steps, including how to appeal.
  • Platforms should convene a coalition of technology companies, civil society organizations, and vulnerable users—potentially leveraging the newly formed Digital Trust & Safety Partnership260Margaret Harding McGill, “Tech giants list principles for handling harmful content,” Axios, February 13, 2021, axios.com/tech-giants-list-principles-for-handling-harmful-content-5c9cfba9-05bc-49ad-846a-baf01abf5976.html?utm_campaign=organic&utm_medium=socialshare&utm_source=twitter to create a baseline set of escalating penalties that can help establish common expectations, which could include:
    • Warnings and strikes. Platforms should adopt a graduated approach to enforcement, issuing warnings and counting strikes before taking the more drastic step of suspending or closing accounts. That said, some violations, like direct incitement to violence, warrant immediate suspension.
    • Temporary suspensions and functionality limitations. Platforms should deploy temporary suspensions and functionality limitations—such as preventing accounts from posting but not from browsing, and suspending accounts for several days or weeks.
    • Coordinated protocols for adjusting these escalating penalties over time, including in response to evolving patterns of abuse or the weaponization of the penalties themselves.
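
To illustrate what a coalition-agreed baseline might look like, here is a hypothetical sketch of a shared penalty ladder expressed as data. Every tier, duration, and violation category below is an assumption offered for discussion, not an existing platform policy or an agreed industry standard.

```python
from datetime import timedelta

# Hypothetical baseline penalty ladder of the kind a cross-industry coalition could
# publish. All tiers, durations, and category names are illustrative assumptions.

PENALTY_LADDER = [
    {"tier": 1, "penalty": "warning",
     "notes": "explain the rule violated, the content removed, and next steps"},
    {"tier": 2, "penalty": "strike with functionality limits", "duration": timedelta(days=7),
     "notes": "e.g., account can browse but not post"},
    {"tier": 3, "penalty": "temporary suspension", "duration": timedelta(days=14)},
    {"tier": 4, "penalty": "permanent account ban", "notes": "subject to appeal"},
]

# Some violations bypass the ladder entirely, as the recommendation above notes.
IMMEDIATE_SUSPENSION = {"direct incitement to violence"}

def next_penalty(violation_type: str, highest_prior_tier: int) -> dict:
    """Return the penalty for a new violation, given the user's highest prior tier."""
    if violation_type in IMMEDIATE_SUSPENSION:
        return {"penalty": "immediate suspension pending review"}
    tier = min(highest_prior_tier + 1, len(PENALTY_LADDER))
    return PENALTY_LADDER[tier - 1]
```

Expressing the ladder as data rather than prose would also make it easier to publish, compare across platforms, and adjust over time, as the final point above recommends.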

Appeals: Ensuring a transparent and expeditious process

The challenge: Not all content reported as abuse is actually abusive. Reporting systems are regularly weaponized by abusers seeking to intimidate or defame their targets and trigger the removal of posts and suspension of accounts.261Sam Biddle, “Facebook Lets Vietnam’s Cyberarmy Target Dissidents, Rejecting a Celebrity’s Plea,” The Intercept, December 21, 2020, theintercept.com/2020/12/21/facebook-vietnam-censorship/; Katie Notopoulos, “How Trolls Locked My Twitter Account For 10 Days, And Welp,” BuzzFeed News, December 2, 2017, buzzfeednews.com/article/katienotopoulos/how-trolls-locked-my-twitter-account-for-10-days-and-welp; Russell Brandom, “Facebook’s Report Abuse button has become a tool of global oppression,” The Verge, September 2, 2014, theverge.com/2014/9/2/6083647/facebook-s-report-abuse-button-has-become-a-tool-of-global-oppression; Ariana Tobin, Madeline Varner, Julia Angwin, “Facebook’s Uneven Enforcement of Hate Speech Rules Allows Vile Posts to Stay Up,” ProPublica, December 18, 2017, propublica.org/article/facebook-enforcement-hate-speech-rules-mistakes Further, perceptions of what does or does not constitute abuse vary among individuals and communities. Content moderators make decisions that reasonable people can disagree with, and content that is flagged in good faith may fall short of violating platform policies.262Jodie Ginsberg, “Social Media Bans Don’t Just Hurt Those You Disagree With—Free Speech Is Damaged When the Axe Falls Too Freely,” The Independent, May 17, 2019, independent.co.uk/voices/free-speech-social-media-alex-jones-donald-trump-facebook-twitter-bans-a8918401.html; Queenie Wong, “Is Facebook censoring conservatives or is moderating just too hard?,” CNET, October 29, 2019, cnet.com/features/is-facebook-censoring-conservatives-or-is-moderating-just-too-hard/ Many users who believe their content was removed as a result of inaccurate or malicious reporting struggle to understand why, have little opportunity to make their case, and can effectively be silenced by the slow restoration of their content.263Jillian C. York, “Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas,” Electronic Frontier Foundation, February 13, 2018, eff.org/deeplinks/2018/02/insert-better-title-here

As platforms make much-needed improvements in the ability to flag and remove abusive content, contentious decisions and false positives will inevitably increase. Organizations like the ACLU264Lee Rowland, “Naked Statue Reveals One Thing: Facebook Censorship Needs Better Appeals Process,” ACLU, September 25, 2013, aclu.org/blog/national-security/naked-statue-reveals-one-thing-facebook-censorship-needs-better-appeals?redirect=blog/technology-and-liberty-national-security/naked-statue-reveals-one-thing-facebook-censorship and Ranking Digital Rights265“2019 RDR Corporate Accountability Index,” Ranking Digital Rights, accessed December 2, 2020 rankingdigitalrights.org/index2019/report/freedom-of-expression/ have argued that a transparent, expeditious appeals process for content takedowns is critical for the preservation of free expression in any content moderation process. PEN America supports the adoption of the Santa Clara Principles. Released in 2018 by a coalition of civil society organizations and academics, this proposal calls for the review of appealed content by human beings, the opportunity for users to provide context during the appeals process, and explicit notification of the final outcome, including a clear explanation of the decision.266“The Santa Clara Principles on Transparency and Accountability in Content Moderation,” accessed December 2, 2020, santaclaraprinciples.org/

Existing tools and mechanisms: Twitter, Facebook, and Instagram have made improvements to their appeals process in recent years. In 2018, Twitter committed to emailing suspended users to advise them of the content of violating tweets and details about which rule they broke.267“Toxic Twitter-A Toxic Place for Women,” Amnesty International, 2018, amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-4/ In 2019, the platform integrated appeals directly into its mobile app, rather than requiring users to fill out a separate online form, which Twitter claims improved its response speed by 60 percent.268Sarah Perez, “Twitter now lets users appeal violations within its app” TechCrunch, April 2, 2019, techcrunch.com/2019/04/02/twitter-now-lets-users-appeal-violations-within-its-app/ In 2020, Instagram also integrated the ability to appeal disabled accounts and content takedowns directly within its app.269Andrew Hutchinson, “Instagram Launches New Appeals Process for Disabled Accounts, Adds Report Tracking In-App.” Social Media Today, February 12, 2020, socialmediatoday.com/news/instagram-launches-new-appeals-process-for-disabled-accounts-adds-report-t/572122/?utm_content=buffer2663e&utm_medium=social&utm_source=facebook&utm_campaign=buffer&fbclid=IwAR10STYBFES9GFo2j_RfVKyGEjP-vqSoZUr3qpC3OqYXYsEF_8mfoEP3cQ4; “My Instagram Account was Deactivated,” Instagram, accessed October 2020, help.instagram.com/contact/606967319425038; “I don’t think Instagram should have taken down my post,” Instagram, accessed December 2, 2020, help.instagram.com/280908123309761?helpref=search&sr=8&query=appeal&search_session_id=caeea0b60cdbc423589691d79ebeac8e  Since 2018, Facebook users have had the ability to appeal the removal of posts, photos, and videos, as well as the removal of groups, profiles, and pages270“Facebook Updates Community Standards, Expands Appeals Process,” NPR, April 24, 2018, npr.org/2018/04/24/605107093/facebook-updates-community-standards-expands-appeals-process—though users still have to appeal through a separate online form.271“Why would my Facebook Page get taken down or have limits placed on it?,” Facebook, accessed October 2020, facebook.com/help/348805468517220?helpref=search&amp%3Bsr=7&amp%3Bquery=appeals&amp%3Bsearch_session_id=e552f40c1536a1881257db196599f27d ; “My personal Facebook account is disabled,” Facebook, accessed December 2, 2020, facebook.com/help/103873106370583?helpref=related&amp%3Bref=related&amp%3Bsource_cms_id=147434898662568&amp%3Brdrhc; “I don’t think Facebook should have taken down my post,” Facebook, December 2, 2020, facebook.com/help/2090856331203011?helpref=search&amp%3Bsr=15&amp%3Bquery=appeal&amp%3Bsearch_session_id=638faaa1c4c3e5611d4e2b01366b82a2 Both Facebook and Instagram now allow users to escalate appeals for content takedowns to a new, purportedly independent oversight board, which will review a small fraction of submissions.272“How do I appeal Instagram’s decision to the Oversight Board?,” Instagram, accessed December 2, 2020, help.instagram.com/675885993348720?helpref=search&amp%3Bsr=2&amp%3Bquery=appeal&amp%3Bsearch_session_id=caeea0b60cdbc423589691d79ebeac8e; “How do I appeal Facebook’s content decision to the Oversight Board?,” Facebook Help Center, accessed February 25, 2021, facebook.com/help/346366453115924

Despite recent improvements, the platforms still have a long way to go in making appeals transparent, expeditious, and clearly communicated. As Amnesty International’s “Toxic Twitter” report notes, “A detailed overview of the appeals process, including an explicit commitment to respond to all appeals or a timeframe of when to expect a response is not included in any of Twitter’s policies.”273“Toxic Twitter-A Toxic Place for Women,” Amnesty International, 2018, amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-4/ The EFF’s annual “Who Has Your Back” report found that only Facebook has a satisfactory commitment to providing “meaningful notice” regarding content and account takedowns, and that none of Twitter, Facebook, or Instagram has a satisfactory commitment to “appeals transparency.”274Gennie Gebhart, “Who Has Your Back? Censorship Edition 2019,” Electronic Frontier Foundation, November 7, 2019, eff.org/sv/wp/who-has-your-back-2019#appeals-mechanisms Twitter does not include appeals in its transparency report at all.275“Rules Enforcement,” Twitter, accessed December 2, 2020, transparency.twitter.com/en/reports/rules-enforcement.html#2019-jul-dec While Facebook and Instagram provide information about how much content users appeal and how much of the content is restored, they offer no information about the timeliness of responses or restorations.276“Community Standards Enforcement Report,” Facebook, accessed December 2, 2020, transparency.facebook.com/community-standards-enforcement

Recommendations: PEN America recommends the implementation of significantly more robust and regularized appeals processes for users whose content or accounts have been taken down, restricted, or suspended. Specifically, platforms should:

  • Fully and prominently integrate appeals into the primary user experience, and communicate clearly and regularly with users at every step of the appeals process via notifications within the platform’s desktop and mobile app, as well as through secondary communication channels, like email.
  • Build on the Santa Clara Principles277“The Santa Clara Principles on Transparency and Accountability in Content Moderation,” accessed December 10, 2020, santaclaraprinciples.org/ to ensure that users can add context during the appeals process and that humans review appealed content.
  • Create a formal, adequately resourced escalation channel for expediting appeals to address cases of malicious or inaccurate content takedowns and time-sensitive cases where a delay in restoring content or accounts could be harmful, for example during times of urgent political debate or crises. At the very least, this channel should enable an institution, such as a news outlet or civil society organization, to advocate with the platform on behalf of an individual, such as a journalist or human rights defender.
  • Substantially increase transparency about the way the appeals process works. Specifically, regular transparency reports from platforms should include metrics on how much content users appeal, how much of the appealed content is restored, the timeliness of responses for all appeals, and the timeliness of restoration. Platforms should provide independent researchers with disaggregated data supporting these metrics. (A simplified sketch of how such metrics could be computed appears after this list.)
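
As a rough illustration of the metrics recommended above, the following sketch computes appeal volume, restoration rate, and timeliness from hypothetical per-appeal records. The record fields and metric definitions are assumptions for illustration only; they are not drawn from any platform’s actual reporting pipeline.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import List, Optional

# Hypothetical per-appeal record and the transparency metrics recommended above.
# Field names and metric definitions are illustrative assumptions.

@dataclass
class AppealRecord:
    submitted: datetime
    decided: Optional[datetime] = None      # None while the appeal is still pending
    restored: bool = False                  # was the content or account reinstated?
    restored_at: Optional[datetime] = None  # when the reinstatement took effect

def transparency_metrics(appeals: List[AppealRecord]) -> dict:
    """Aggregate appeal volume, restoration rate, and timeliness (in hours)."""
    decided = [a for a in appeals if a.decided is not None]
    restored = [a for a in decided if a.restored and a.restored_at is not None]

    def hours(start: datetime, end: datetime) -> float:
        return (end - start).total_seconds() / 3600

    return {
        "appeals_received": len(appeals),
        "appeals_decided": len(decided),
        "restoration_rate": len(restored) / len(decided) if decided else None,
        "median_hours_to_decision": median(hours(a.submitted, a.decided) for a in decided) if decided else None,
        "median_hours_to_restoration": median(hours(a.submitted, a.restored_at) for a in restored) if restored else None,
    }
```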

Protesters marching in Washington, D.C. in November 2016. Photo by Lorie Shaull

Methodology

In this report, PEN America lays out the impact of online abuse on the lives, livelihoods, and freedom of expression of writers and journalists and recommends concrete changes that technology companies should make now to better protect all vulnerable people. Our recommendations center on the experiences and needs of United States-based users disproportionately targeted online for their identity and profession and prioritize changes to the design of digital platforms. We base our proposals on in-depth qualitative research, including dozens of interviews and a comprehensive literature review, and on the extensive experience that PEN America has gleaned through its Online Abuse Defense program.

What is online abuse?

PEN America defines online abuse as the “severe or pervasive targeting of an individual or group online with harmful behavior.” “Severe” because a single incident of online abuse, such as a death threat or the publication of a home address, can have serious consequences. “Pervasive” because individual incidents, such as an insult or spamming, may not rise to the level of abuse, but a sustained or coordinated onslaught of incidents like these can cause significant harm. “Harm” can include emotional distress, anxiety, intimidation, humiliation, invasion of privacy, the chilling of expression, professional damage, fear for physical safety, and physical violence.278“Defining ‘Online Abuse’: A Glossary of Terms,” Online Harassment Field Manual, accessed January 2021, onlineharassmentfieldmanual.pen.org/defining-online-harassment-a-glossary-of-terms/

PEN America’s Online Abuse Defense Program

PEN America is a nonprofit that stands at the intersection of literature and human rights to protect free expression in the United States and worldwide. Our mission is to unite writers and their allies to celebrate creative expression and defend the liberties that make it possible. Our Membership consists of over 7,500 journalists, novelists, nonfiction writers, editors, poets, essayists, playwrights, publishers, translators, agents, and other writing professionals, as well as devoted readers and supporters throughout the United States. In 2017, PEN America conducted a survey of over 230 writers and journalists within our network and found that the majority of respondents who had faced online abuse reported fearing for their safety and engaging in self-censorship; this included everything from refraining from publishing their work to deleting their social media accounts.279“Online Harassment Survey: Key Findings​,” PEN America (blog), April 17, 2018, pen.org/online-harassment-survey-key-findings/ In response, in 2018 we launched our Online Abuse Defense program in the United States, which centers on education, research, and advocacy. We develop resources to equip writers and journalists, as well as their allies and employers, with comprehensive strategies to defend themselves against online abuse. Our Field Manual, articles, and tipsheets have reached over 250,000 people.280“Online Harassment Field Manual,” PEN America, accessed March 2021, onlineharassmentfieldmanual.pen.org/; Viktorya Vilk, “What to Do When Your Employee Is Harassed Online,” Harvard Business Review, July 31, 2020, hbr.org/2020/07/what-to-do-when-your-employee-is-harassed-online; Viktorya Vilk, “What to Do if You’re the Target of Online Harassment,” Slate, June 3, 2020, slate.com/technology/2020/06/what-to-do-online-harassment.html; Viktorya Vilk, “Why You Should Dox Yourself (Sort Of),” Slate, February 28, 2020, slate.com/technology/2020/02/how-and-why-dox-yourself.html The authors of this report have led presentations and workshops on combating online abuse and bolstering digital safety for over 7,000 journalists, writers, editors, academics, lawyers, activists, and others; we also work closely with newsrooms, publishing companies, and professional associations to develop policies, protocols, and training to protect and support writers and journalists. Finally, we conduct research on the impact of online abuse and the solutions to address it, and we advocate for change to reduce online harm so that all creative and media professionals can continue to express themselves freely.

Research Methodology

For this report, PEN America conducted in-depth interviews between April 2020 and February 2021 with over 50 people, including writers and journalists; editors and newsroom leaders; experts in online abuse and digital safety; researchers and academics who study media, UX, and design; technologists; lawyers; and representatives of technology companies. We conducted a comprehensive cross-disciplinary literature review of over 100 articles, reports, papers, books, and guidelines from academia and civil society, in fields including design, computer science, sociology, human rights, and technology.

We centered our research on the experiences of people disproportionately targeted by online abuse for their identity and/or profession, specifically: 1) writers and journalists whose work requires a public presence online, and 2) women, BIPOC (Black, indigenous, and people of color), LGBTQIA+ (lesbian, gay, bisexual, transgender, queer, intersex, and asexual) people, and/or people who belong to religious or ethnic minorities.281In a 2020 study from the Anti Defamation League and YouGov, 35 percent of respondents reported that the harassment they faced was connected to their gender identity, race or ethnicity, sexual orientation, religion, or disability. Among these groups, respondents who identified as LGBTQ+ reported the highest rates of harassment. Women also cited disproportionate levels of harassment, including more than three times the gender-based harassment experienced by men (37 percent versus 12 percent). “Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020, adl.org/online-hate-2020 In our recommendations, we prioritize the needs of people at the intersection of these two groups because we know that they experience especially egregious abuse.282Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting,” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf; see also Lucy Westcott, “‘The threats follow us home’: Survey details risks for female journalists in U.S., Canada,” CPJ, September 4, 2019, cpj.org/2019/09/canada-usa-female-journalist-safety-online-harassment-survey/; for global stats, see also: Julie Posetti et al., “Online violence Against Women Journalists: A Global Snapshot of Incidence and Impacts,” UNESCO, December 1 2020, icfj.org/sites/default/files/2020-12/UNESCO%20Online%20Violence%20Against%20Women%20Journalists%20-%20A%20Global%20Snapshot%20Dec9pm.pdf; “Troll Patrol Findings,” Amnesty International, 2018, decoders.amnesty.org/projects/troll-patrol/findings We contend that when technology companies meet the needs of users most vulnerable to online abuse, they will better serve all of their users.

We focus our analysis specifically on Twitter, Facebook, and Instagram because these are the platforms on which United States–based writers and journalists rely most heavily in their work,283Michelle P. Ferrier, “Attacks and Harassment: The Impact on Female Journalists and Their Reporting (Rep.),” IWMF/TrollBusters, 2018, iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf; “Why journalists use social media,” NewsLab, 2018, newslab.org/journalists-use-social-media/#:~:text=The%20researchers%20found%20that%20eight,media%20in%20their%20daily%20work.&text=About%2073%20percent%20of%20the,there%20is%20any%20breaking%20news; “2017 Global Social Journalism Study,” Cision, accessed February 19, 2021, cision.com/content/dam/cision/Resources/white-papers/SJS_Interactive_Final2.pdf and the platforms on which United States–based users report experiencing the most abuse.284“Online Hate and Harassment Report: The American Experience 2020,” ADL, June 2020, adl.org/online-hate-2020; see also Emily A. Vogels, “The State of Online Harassment,” Pew Research Center, January 13, 2021, pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/ We analyzed in-platform features designed to mitigate online abuse (such as blocking, muting, hiding, restricting, and reporting). And we also identified and analyzed relevant third-party tools, some built by private companies and others by universities and nonprofits (such as Block Party, Tall Poppy, BodyGuard, Sentropy Protect, TweetDeleter, Jumbo, Tune, and many others). Although writers and journalists also experience a significant amount of abuse on private messaging platforms (such as email, text messaging, WhatsApp, and Facebook Groups), these platforms carry their own unique privacy and security challenges and fall outside the scope of this report. While our recommendations are rooted in our research on three major social media platforms, we believe they are useful and relevant to all technology companies that design products to facilitate communication and social interaction.

Our research and recommendations focus on the United States, where PEN America’s expertise in online abuse is strongest, but we fully acknowledge that online abuse is a global problem and we understand the urgent need to find locally and regionally relevant solutions. Several of the technology companies analyzed in this report have a global user base, and one of the central challenges to curtailing online abuse is the blanket application of United States–based rules, strategies, and cultural norms internationally.285“Activists and tech companies met to talk about online violence against women: here are the takeaways,” Web Foundation, August 10, 2020, webfoundation.org/2020/08/activists-and-tech-companies-met-to-talk-about-online-violence-against-women-here-are-the-takeaways/ Throughout this report, we endeavor to account for the ways that changes to features on global platforms could play out in regions and geopolitical contexts outside the United States.

Acknowledgments

This report was written by Viktorya Vilk, program director for Digital Safety and Free Expression; Elodie Vialle, program consultant for Digital Safety and Free Expression; and Matt Bailey, program director for Digital Freedom at PEN America. PEN America’s senior director for Free Expression Programs, Summer Lopez, reviewed and edited the report, as did CEO Suzanne Nossel. James Tager, Nora Benavidez, Stephen Fee, and Dru Menaker provided thoughtful feedback. PEN America would also like to thank the interns whose research, fact-checking, and proofreading contributed significantly to this report: Jazilah Salam, Margaret Tilley, Hiba Ismail, Sara Gronich, Tarini Krishna, Blythe Drucker, Jordan Pilant, Glynnis Eldridge, and Cheryl Hege.

PEN America extends special thanks to the following experts for providing invaluable input on this report: Jami Floyd, senior editor of the Race and Justice unit at New York Public Radio; Dr. Michelle Ferrier, founder of TrollBusters and executive director of Media Innovation Collaboratory; Kat Lo, content moderation lead at Meedan; Azmina Dhrodia, senior policy manager for gender and data rights at World Wide Web Foundation and adviser at Glitch; Jamia Wilson, vice president and executive editor at Random House; Ela Stapley, digital safety adviser and founder of Siskin Labs; T. Annie Nguyen, lead product designer and design researcher; and Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. PEN America is also deeply grateful to the many journalists, writers, scholars, technologists, psychologists, civil society advocates, lawyers, and other experts who agreed to be interviewed for this report, including those who are not acknowledged by name. PEN America appreciates the responsiveness of the representatives at Twitter, Facebook, Instagram, and Google in our many exchanges, as well as the generosity and openness of the founders and staff of the many third-party tools we examined for this report.

Our deep abiding appreciation goes to the Democracy Fund and Craig Newmark Philanthropies for their support of this project. PEN America also receives financial support from Google and Facebook, but those funds did not support the research, writing, or publication of this report.

The report was edited by Susan Chumsky.