Every Friday, we discuss tricky questions about free speech and expression as they pertain to the ongoing pandemic with our CEO Suzanne Nossel, author of the forthcoming Dare to Speak: Defending Free Speech for All, in our weekly PEN Pod segment “Tough Questions.” In this week’s episode, we talk about new actions taken by Facebook and Twitter to confront content regulation, and how effective and ethical these initiatives seem to be. Listen below for our full conversation (our interview with Suzanne runs until the 10:45 mark).

Facebook recently rolled out the leadership of their new oversight board. It’s designed to provide some kind of check on Facebook’s decisions about the content that it either takes down or leaves up. Is this a step in the right direction, or is this Facebook dodging accountability by trying to police itself?
It’s a bit of both, to be honest. There have been so many gaping questions about the regulation of online content, and there are so many areas of online content—whether it’s the advocacy of terrorism and terrorist recruitment, disinformation and misinformation relative to the pandemic, harassment, exposés, or pornography—that have sparked enormous debate over where the boundaries should lie and whether we want outlets like Twitter and Facebook to function as a digital equivalent of the public square, where the boundaries are very wide and you can say just about anything short of, perhaps, direct incitement to violence, which can be prohibited under the First Amendment.

Do we want these platforms to have a playing field for speech that is as broad-ranging and limitless as that, or do we believe these platforms owe a responsibility to society to regulate, control, and mitigate some of the very real harms and damage that we see from certain types of speech? That’s the ongoing dilemma and debate that has been raging through policy-making bodies in the European Union and other jurisdictions, within the social media companies themselves, and across Silicon Valley. A whole cadre of law professors and advocates in the world of digital freedom have been debating how to draw the boundaries, and how aggressively we want social media platforms to police content.


“Having a group of people to deliberate on these questions [of how to regulate content]—who are a step removed from the profit motives that must govern a private company—is a positive step, but it also reflects the yawning gap in terms of regulation.”


This idea of the content review board germinated about 18 months ago. The notion was that you should create a group of people a step removed from Facebook’s management—with its concerns over building audience, profit, and the bottom line—and have those people bring requisite expertise from the legal realm, the policy realm, technology, and issues like harassment and intimidation, coming together to deliberate these tough questions and decide where the boundaries ought to be drawn. I think there is merit to the notion that you want to draw in these types of expertise. Having a group of people to deliberate on these questions—who are a step removed from the profit motives that must govern a private company—is a positive step, but it also reflects the yawning gap in terms of regulation. Our regulators are nowhere near coming to grips with these questions.

Some people start to say this is an abdication by government, but there’s a flip side to that, which is: do we really want legislatures around the world dictating to social media companies how they should police their platforms? Some countries, such as Germany, do so far more aggressively. Germany has strict laws prohibiting hateful speech online, and companies can be fined steeply if they don’t remove hateful content from their platforms. That obviously derives from Germany’s particular history of bigotry and genocide, so you can understand why it is stricter on those questions, but it also raises real free speech concerns when governments are very assertive in policing online speech. We see a lot of false positives, and disenfranchised groups are often the ones who end up being accused of purveying the hateful content or bigotry. So there are problems with empowering the government to do this, and that’s where you get back to Facebook.

There was an idea that this should be an industry-wide initiative—that you should have a body of outsiders that would perform this function not just for Facebook but collectively for Facebook, Google, YouTube, and Instagram—and it was pretty clear that was not going to get off the ground anytime soon. So Facebook has gone ahead and put this together. It’s a very distinguished group of people who really do have significant expertise in these areas. I basically think we should give them a chance and see what they come up with. I think the rubber is going to hit the road when they want to wade into an issue that Facebook’s management doesn’t want them to have their hands on, or when they render a decision that really runs against the company’s corporate interest. The question then will be: will Mark Zuckerberg, as he has said he would, heed the directives of this body, or will its members be pulled back or reined in in some way? And if so, how do they then react? Do they put up with that or not? I think they all feel it’s a prestigious body to be part of, and they will be somewhat invested in trying to make this thing work.


“When you speak, you need to think through where your words are going to land, what kind of audience they’re going to reach, and who you might not be considering or thinking of, but who might see your words and interpret them in a very different way from what you intended.”


Twitter announced this week they would test a mechanism that sends users a prompt when they attempt to reply to a message using offensive or hurtful language. Do you think that this is a threat to speech, or is this a smart way to prevent online hate?
It may come as a surprise from somebody who’s running a free speech organization, but I actually like this idea. In my book Dare to Speak: Defending Free Speech for All, which comes out in July, my first chapter is about the idea of being conscientious with language: when you speak, you need to think through where your words are going to land, what kind of audience they’re going to reach, and who you might not be considering or thinking of, but who might see your words and interpret them in a very different way from what you intended. That’s certainly true on Twitter, where you may tweet to your own followers, but of course they can retweet, and your tweets can land far and wide all over the world, reaching audiences that you never dreamed of. The idea is that conscientiousness should be baked into the system, so that if you’re using a nickname or something that may be considered a slur in some contexts, that’s something to be aware of—and something the system might alert you to.

It’s not mandatory that you heed that alert. You can ignore it, you can swat it away. I also think that if people decide they’d rather not see these alerts, that should be an option—you shouldn’t be forced to be confronted with this. But who doesn’t write the occasional email or social media post in a moment of anger or frustration, or when a great joke comes to mind, and then think better of it? You realize somebody might read it the wrong way, or that if it were read by somebody in a different age, ethnic, or gender group, they could misunderstand your intent in writing it, and so you pull it back. And that’s not censorship. That’s part of being a responsible person living in society and thinking through how you’re wording things, who you’re speaking to, how well you can control where your ideas may travel, and whether you’re putting things in as careful and thoughtful a way as you want to.


“Part of being a responsible person living in society [is] thinking through how you’re wording things, who you’re speaking to, how well you can control where your ideas may travel, and whether you’re putting things in as careful and thoughtful a way as you want to.”


I think Twitter’s notion is that they can offer a tool that, at least with certain language triggers, can prompt you to think twice—that lets you know that a word you never realized was a slur actually is considered one, maybe in some parts of the country or the world. In my book, I talk about a lot of examples where people say something that, from their perspective, is perfectly innocent, and they have no idea about a double meaning.

There’s one famous incident where somebody was writing about the basketball player Jeremy Lin and the article was headlined “a chink in the armor.” It was incredibly offensive to Chinese people because, of course, “chink” has long been known as a slur. But it’s also a legitimate phrase: “a chink in the armor” has a very different meaning—a small crack or gap, a weak spot. The writer had no thought of the slur in relation to Jeremy Lin; that was the furthest thing from his mind. If he had had a little warning, like the prompt Twitter is testing—“Hey, do you really want to use the word ‘chink’?”—he might’ve caught this and avoided an enormous amount of grief and, in fact, losing his job.


Send a message to The PEN Pod

We’d like to know what books you’re reading and how you’re staying connected in the literary community. Click here to leave a voicemail for us. Your message could end up on a future episode of this podcast!
