For years now, we at PEN America have been raising the alarm about online abuse and the threat it poses to free speech. We know that online hatred, from vicious name-calling to death threats to doxing, has serious impacts on the health and well-being of writers, journalists, and all users. It’s also putting a stranglehold on free expression. Earlier this week, we published a report proposing some major changes we think Facebook, Twitter, and other platforms need to make to shield users from the harshest effects of online abuse. This week on The PEN Pod, Viktorya Vilk, PEN America’s program director of Digital Safety and Free Expression and lead author of the report, discusses how online abuse directly affects freedom of speech, as well as the different ways social media companies can stem abuse and support the users who face it. She also explains the importance of tackling online abuse from all sides and how everyone must get involved in the multifaceted fight to halt it. Listen below for our full conversation (our interview with Viktorya is up until the 12:14 mark).

Viktorya, I want to get to the recommendations in just a minute, but first I want you to sketch out how you came to write this report.
Over the past three years, I have led PEN America’s online abuse defense program, and my coauthors and I—the three of us behind this report—have worked with thousands of writers, journalists, artists, and activists who are dealing on a daily basis with death threats, rape threats, hateful slurs, impersonation, and every other abusive tactic under the sun. We got really fed up.

For too long, the burden of online abuse has fallen on the shoulders of the people who are on the receiving end of it, and those folks are almost always women, people of color, LGBTQ folks, and members of religious and ethnic minorities. The goal of the research that turned into this report was to find solutions. We decided to focus on social media platforms, especially Twitter, Facebook, and Instagram, because we know that’s where people are experiencing the most abuse online. Those are also the platforms that people increasingly rely on to do their jobs and to make their voices heard.


“The recommendations [of our report] essentially focus on the design of the platforms themselves from the standpoint of the people who are experiencing abuse. We asked ourselves, what challenges are people facing on a daily basis? What kinds of tools already exist? Do they work? Don’t they work? Where are the gaps? How can platforms build features that empower people who are on the receiving end of abuse and their allies, and deter abusers?”


We centered our research on the experiences and needs of people who are disproportionately targeted for their identity and their profession, with the idea that if the platforms can better protect their most vulnerable users, they can better serve all of their users. The recommendations essentially focus on the design of the platforms themselves from the standpoint of the people who are experiencing abuse. We asked ourselves, what challenges are people facing on a daily basis? What kinds of tools already exist? Do they work? Don’t they work? Where are the gaps? How can platforms build features that empower people who are on the receiving end of abuse and their allies, and deter abusers?

Getting into some specifics, the report doesn’t say, “Facebook do X, Twitter do Y, Instagram do Z.” It’s a bit more principle-driven than that, but you are sketching out some fairly specific product design changes that you think need to be made. What are some of the highlights?
Our recommendations fall into three buckets. There are proactive measures, how to protect people before abuse even happens; there are reactive measures, how to help people defend themselves when they’re under attack; and there are accountability measures, how to disincentivize people who are abusing others. We’ve got 15 recommendations. I’m not going to talk you through all of those now; I’m just going to highlight a few. For example, nobody should have to see every hateful slur and sexist insult that is lobbed their way. What we want platforms to do is to treat online abuse like spam: proactively filter it out so that the people on the receiving end don’t have to see it, and quarantine it in a kind of toxic holding area where people can review it and address it if and when they need to. That’s just one idea.
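
To make the spam analogy a bit more concrete, here is a minimal, purely illustrative sketch of a filter-and-quarantine flow. Everything in it is hypothetical: the function and variable names, and the toy keyword check standing in for a real abuse classifier. It is not code from the report or from any platform.

```python
# Purely illustrative: a toy filter-and-quarantine flow for the spam analogy.
# A real system would use trained classifiers and user-configurable filters;
# the word list and all names here are hypothetical.

QUARANTINE = []  # the "toxic holding area" a user can review later
INBOX = []       # what the user actually sees

def looks_abusive(message: str) -> bool:
    # Placeholder for a real abuse classifier.
    blocked_terms = {"exampleslur", "exampleinsult"}
    return any(term in message.lower() for term in blocked_terms)

def deliver(message: str) -> None:
    # Suspected abuse is routed to quarantine instead of the visible inbox,
    # so the recipient only sees it if and when they choose to review it.
    if looks_abusive(message):
        QUARANTINE.append(message)
    else:
        INBOX.append(message)

deliver("Congrats on the new book!")
deliver("you exampleslur ...")
print(len(INBOX), len(QUARANTINE))  # 1 1
```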

Another example: People who are facing extreme or overwhelming abuse on social media—let’s say they got a death threat, or they’re being attacked by a cyber mob because, oh, I don’t know, some right-wing media personality put a target on their back. For those folks, it can be incredibly overwhelming to get hundreds or thousands of messages an hour that are cruel and hurtful and harmful. Those folks need personalized, trauma-informed support in real time. We want platforms to create an SOS button that people can tap to instantly activate more protections and more support.


“The reason is that the business model that underpins social media is basically predicated on maximizing user engagement and attention, and that prioritizes virality, emotion, and immediacy, but it doesn’t prioritize safety. In fact, all of those things actually amplify abuse. That’s why we’ve gotten where we are today, but in terms of what we can do to actually get the platforms to make changes, I am cautiously optimistic.”


I’ll give you just one more example: Nobody should have to deal with online abuse on their own. Right now, if you are under attack and you want to ask a friend or a family member or a colleague or a security person at your office to help you, you have to give them the password to your account—which isn’t safe. What we want is for platforms to let people assign trusted allies to a rapid response team and give those allies permission to access certain aspects of their accounts, so that those allies can help them document, block, report, and mute abuse, and so that they’re not dealing with it on their own.
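
Again purely as an illustration, the sketch below shows one way scoped delegation could work in principle: an account owner grants an ally a narrow set of moderation permissions without ever sharing a password. The class, the scope names, and the behavior are all hypothetical, not a description of any existing platform feature or of the report’s exact proposal.

```python
# Purely illustrative: scoped delegation for "trusted allies," so a friend can
# help document, block, report, and mute abuse without holding the password.
# All names and scopes are hypothetical.

from dataclasses import dataclass, field

ALLOWED_SCOPES = {"document", "block", "report", "mute"}

@dataclass
class Account:
    owner: str
    delegations: dict = field(default_factory=dict)  # ally -> set of scopes

    def grant(self, ally: str, scopes: set) -> None:
        # Only narrow moderation actions can be delegated; posting, direct
        # messages, and settings stay with the owner.
        self.delegations[ally] = scopes & ALLOWED_SCOPES

    def can(self, actor: str, action: str) -> bool:
        if actor == self.owner:
            return True
        return action in self.delegations.get(actor, set())

# The owner delegates reporting and muting to an ally; the ally still
# cannot post on the owner's behalf.
acct = Account(owner="writer")
acct.grant("trusted_friend", {"report", "mute", "post"})
assert acct.can("trusted_friend", "report")
assert not acct.can("trusted_friend", "post")
```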

Those are just a couple of examples, and if they sound like no-brainers, it’s because they are. What we realized is that platforms should have put these measures in place years ago, and they haven’t yet.

Why haven’t they done this? If it’s so obvious, if it’s so smack-yourself-on-the-forehead obvious—which it sounds like it is—why is there not an incentive here for them to move?
What I found remarkable in this process is that for an industry that’s known for its propensity to “move fast and break things,” social media companies have been painfully slow to address online abuse. I’ll give you just one example because it’s so shocking: Twitter did not integrate a button to make it easier to report abuse until seven years after it first launched, which tells you a lot. The reason is that the business model that underpins social media is basically predicated on maximizing user engagement and attention, and that prioritizes virality, emotion, and immediacy, but it doesn’t prioritize safety. In fact, all of those things actually amplify abuse. That’s why we’ve gotten where we are today, but in terms of what we can do to actually get the platforms to make changes, I am cautiously optimistic, and there are a couple of reasons.

First of all, I think having the reputation of being a toxic cesspool is bad for business. When celebrities like Chrissy Teigen are publicly leaving your platform, when corporations are pulling ad money, when governments are threatening regulation, that’s not a good sign. It’s not good for your business. Second of all, platforms have been loudly proclaiming for years that they are not only committed to creating safer spaces online, but that they want to really prioritize equity and inclusion. If that’s true, it’s time for them to put their money where their mouths are and to invest staff time and resources into building better tools to protect the people who are disproportionately impacted by abuse on their platforms.

All that said though, the only thing that really seems to work is pressure and attention from the public, from civil society, from industry, from legislators. I think all of us collectively have to keep up the pressure and make a lot of noise.


“What [many people] fail to see is that online abuse is deliberately deployed to stifle and silence speech. You cannot have free speech, you can’t practice free speech, when abusive trolls threaten your family and publish your home address online. It becomes kind of difficult. That’s kind of the starting point. You have to understand that online abuse in and of itself threatens free speech, and it threatens press freedom.”


On this idea that the platforms move fast and take risks—all these things that I think the platforms love for us to think about them—another feature has always been this sort of civil libertarian streak, the idea that they are the place for free expression and free speech. That’s something we’re concerned about too, and I know it’s one of the defenses that the CEOs often raise as well: We don’t want to limit free speech on the platforms, therefore all of these proposed solutions for reining in online abuse—or even reining in disinformation—this whole universe of problems that we’re all encountering, don’t hold water. What do you say to that?
It’s a very worthwhile thing to think through. It’s a very good question. I think the problem I have with it is that people often position online abuse and free speech as some kind of false dichotomy. What they fail to see is that online abuse is deliberately deployed to stifle and silence speech. You cannot have free speech, you can’t practice free speech, when abusive trolls threaten your family and publish your home address online. It becomes kind of difficult. That’s kind of the starting point. You have to understand that online abuse in and of itself threatens free speech, and it threatens press freedom.

That doesn’t mean that there are no tensions between protecting free speech and fighting online abuse, and I’ll give you one example to give you a sense of what I mean. Whether or not something is perceived as harassment can depend on the context. A slur can be used as hate speech, but it can also be reclaimed as a form of empowerment. We know that content moderation is imperfect; there’s a lot of implicit bias. If your only solution to the problem of online abuse is just to blanket-ban content and remove it from the platforms, you might end up further harming the very people that you’re trying to protect.

In this report, we took at face value that the tensions and the challenges are real. We tried very hard to think through unintended consequences, how tools and features could be weaponized, where things could go wrong. But too often, people and platforms use the complexity of the problem—the very real challenges—as an excuse to throw up their hands and give up. They hide behind the free speech argument, and we’re here to say that that’s disingenuous, because we’re a free speech organization and we have 30-odd pages of recommendations: very concrete things that can actually be done.


“Hate and harassment did not emerge with the rise of social media. . . These are the things that are happening already in society that are being amplified. I think if we’re serious about fighting online abuse, everybody’s going to have to get involved. We need to train people to defend themselves. We also need allies to step in and intervene when they see people under attack online. We need employers to do more to protect and support staff.”


This report goes through a lot of product design changes that the platforms should make. You don’t think it should end there, right? You think there’s a broader set of solutions. Where else should we be looking to try to rein in the effects of online abuse?
I absolutely don’t think this is just about product. I think the problem is enormous at this point: Pew came out with a study recently that said almost half of Americans have experienced online abuse and almost two-thirds have witnessed it, so that gives you a sense of how big a problem we’re looking at.

We have to tackle it from all sides. Our report focuses on product changes, but platforms of every type also have to reform their policies. They still have to tighten loopholes in those policies that allow certain kinds of hate and harassment to happen. They have to completely rethink how they tackle content moderation—and there’s some very good research out of NYU on that—but in order to get platforms to do that kind of thing at scale, we are going to have to rely on some regulatory reform. That’s critically important; it’s just that it takes time, and it’s hard to get right.

We decided to focus on the low-hanging fruit—the stuff that the platforms literally have no excuse not to fix and not to act on. But I might even zoom out a little bit more and say that it’s important to understand that hate and harassment did not emerge with the rise of social media. My coauthor Matt Bailey likes to call the internet an “amplification machine.” These are the things that are happening already in society that are being amplified.

I think if we’re serious about fighting online abuse, everybody’s going to have to get involved. We need to train people to defend themselves. We also need allies to step in and intervene when they see people under attack online. We need employers to do more to protect and support staff. So I think it’s kind of an “all of the above”: talk about the platforms, but also talk about what each and every one of us can do to tackle this problem.