
As part of our effort to #FightOnlineAbuseNow, we’re publishing a series of pieces about the harm online abuse poses to free speech—but also what Facebook, Twitter, and other social media companies can do to blunt its worst effects.

PEN America is calling on social media platforms to create new features that better protect users from abuse: a shield that automatically detects and silos toxic content and a dashboard that provides a designated space for users to navigate (or ignore) such content.

The shield would proactively identify abusive content in feeds, direct messages, and notifications; filter the content out; and quarantine it in the dashboard. Users could visit the dashboard to review this toxic content and decide if they want to release it, block or mute it, document it, or report it. Within the dashboard, users could also activate a rapid response team to help manage the abuse, take steps to tighten their privacy and security, and access additional in-platform support and external resources.
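To make the concept concrete, here is a minimal sketch of how such a shield-and-dashboard pipeline could fit together. Everything in it is illustrative: the toxicity scorer, the 0.8 threshold, and the class names are hypothetical stand-ins, not any platform’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Message:
    sender: str
    text: str
    toxicity: float = 0.0  # score the shield assigns, 0.0 (benign) to 1.0 (toxic)

@dataclass
class Dashboard:
    """Designated space where quarantined content waits for user review."""
    quarantined: List[Message] = field(default_factory=list)

    def review(self) -> List[Message]:
        # The user decides, item by item, whether to release,
        # block/mute, document, or report the content.
        return list(self.quarantined)

def shield(feed: List[Message],
           score: Callable[[str], float],
           dashboard: Dashboard,
           threshold: float = 0.8) -> List[Message]:
    """Proactively filter a feed: show benign items, silo the rest."""
    visible = []
    for msg in feed:
        msg.toxicity = score(msg.text)
        if msg.toxicity >= threshold:
            dashboard.quarantined.append(msg)  # siloed, not deleted
        else:
            visible.append(msg)
    return visible
```

The key design choice is that nothing is deleted automatically: abusive content is moved out of sight but remains reviewable, which addresses the catch-22 Vilk describes below.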

For more about the shield, we spoke with Viktorya Vilk, PEN America’s program director for digital safety and free expression.

How did you come up with this feature?
VIKTORYA VILK: Over the past three years, we’ve been traveling the country working with news outlets, publishers, and professional associations to offer training and support to writers and journalists facing relentless online abuse.

What we realized is that writers and journalists often find themselves in a catch-22. Many of them are subjected to a lot of abuse, which can take a serious toll on their mental health and cause self-censorship. So they need to be able to reduce or control their exposure. But they also need to be able to review abusive content in case it escalates. For example, if someone publishes their home address or threatens physical harm, they need to know that’s happening.

People kept saying to us, “I need to use social media to do my job, but I don’t want to have to see these toxic comments and messages all the time.” We started talking to technologists and other experts about what it would mean to treat online abuse like toxic spam. Is it technically feasible to proactively and automatically detect hateful and harassing content? Has anyone tried to do this? What worked and what didn’t? And what’s amazing is that finally, the technology is catching up to the need. It’s now possible to actually build shields around individual accounts and dashboards for managing abusive content.
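The detection piece, at least, is no longer hypothetical. As one sketch of what automatic scoring looks like in practice, here is a call to Jigsaw’s Perspective API (one of the tools mentioned below), which returns a probability that a piece of text will be perceived as toxic. The endpoint and response fields follow Perspective’s published v1alpha1 API; the API key is a placeholder, and the threshold would be the user’s choice.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder; issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/"
       f"v1alpha1/comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Ask Perspective how likely a comment is to be perceived as toxic (0-1)."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A shield could plug this in as its scorer and quarantine anything
# above a user-chosen threshold, e.g. toxicity_score(text) >= 0.8.
```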

Is anything like this already available?
VILK: There are brilliant folks experimenting in this space, and we’ve been inspired by some very promising tools, like Block Party, Perspective, Coral Project, and Sentropy. We’ve actually helped test and provide feedback on a number of these tools. The shield and dashboard, as we envision them, integrate the most promising ideas we could find with what vulnerable users have repeatedly told us they need, all built into the platforms themselves.

Your PEN America report, No Excuse for Abuse, is aimed at the big social media platforms like Facebook, Instagram, and Twitter. Did you talk to them, and what kind of response did you get?
VILK: Yes, we’ve consulted with them throughout the research process. PEN America and many other organizations working to combat online abuse have been talking to the platforms for years about tightening their rules and improving their products. To be fair, we’ve seen some progress recently, with platforms revising their policies on hate and harassment and improving features like blocking and muting, but it’s nowhere near enough and it’s taking too long. Many of these platforms have been around for over a decade, and they continue to prioritize engagement and monetizing attention over safety and equity. The platforms are simply not investing enough resources, specifically staff time and money, to make products safer for their most vulnerable users. Nor are they spending enough time designing with and for those users. Honestly, they could build or revamp many of the features we propose tomorrow if they wanted to.

What makes this report different or unique?
VILK: This report is the first to look at online harassment from the standpoint of user experience and product design and ask, “What can platforms do now to protect and support people facing abuse?” We’ve seen some excellent reports in recent years offering recommendations for how platforms can revamp content moderation and reform the algorithms that control what we see in our feeds. We’ve also seen compelling reports on the need to rethink the entire business model that underpins social media, which is rooted in maximizing engagement to monetize attention. Those kinds of big-picture reforms are critically necessary, but they take time, and people are suffering now. So in this report, we set out to understand how social media companies can improve the design and functionality of their platforms to tackle abuse.

What do you hope will come out of this report?
VILK: Our goal was to provide technology companies with a blueprint of tools and features they can and should build ASAP to combat online abuse and empower targeted users. The report draws on years of experience and over 10 months of research, including a comprehensive literature review and interviews with nearly 50 writers, journalists, researchers, lawyers, and experts in fields ranging from psychology to technology. We set out to understand the experiences and needs of users who are disproportionately targeted online for their identities and professions, especially writers and journalists, and to come up with concrete, actionable solutions.

If platforms are serious about addressing abuse—which overwhelmingly affects women; Black, Latino, and Indigenous people; LGBTQ+ people; and religious and ethnic minorities—then it’s time to step up. We recognize the recent progress platforms have made, but it’s sporadic and piecemeal. What we’re saying is, the time has come to invest the time and resources to build out anti-harassment tools and features in a concrete and deliberate way. Hate and harassment are driving people off platforms. If companies make their platforms safer and more equitable, they’re not only doing the right thing, they’re also expanding their audience.