In just a few days, the debate over content moderation on Facebook and Twitter has morphed into a full-blown campaign issue. After the platforms took action late last week to slow the spread of a dubious New York Post story, conservatives are now crying “censorship,” and the debate over how information spreads during the campaign is front and center in the final two weeks until Election Day. To help us sort through some of the big free speech issues at play, Matt Bailey, PEN America’s digital freedom program director, joined us on The PEN Pod to discuss how social media platforms are combating election disinformation, whether or not these policies work, and his own concerns about public discourse for the upcoming election. Listen below for our full conversation (our interview with Matt is up until the 8:24 mark).

I realize it’s been a few days, but this issue doesn’t seem to be going anywhere. What do we know so far about why platforms took the steps that they did? Is it just that they’re afraid of a 2016 repeat?
That’s a good question. I think there’s a lot of layers here, and that we can start with kind of taking the platforms at their word. We’re talking about Twitter and Facebook here, and they both actually took very different actions almost at the same moment. So, what you have is Twitter basically blocked linking to the New York Post article, and the pretext that they used for that was an existing policy against hacked materials.

They basically said, “Look, we’re not adjudicating, whether this is good reporting, whether that’s good for America, whether it’s a free speech issue—we don’t allow hacked materials to be amplified on our platform.” In this case, they were talking about the emails, and they’re particularly concerned with some of the private information that was included in the emails, so they took that action.

Facebook nearly concurrently decided that this was potentially an issue for fact-checking, referring to some of their fact-checking networks that they partner with more broadly, and they made a decision—apparently as far as we can tell—to turn off amplification of posts.

They’re actually still allowing people to talk about it, to post links to it. What they weren’t doing is essentially pushing it to the top of your feed. In both of these cases, what you have is a decision—not so much to censor the content in any simple way, but instead to limit its reach or to otherwise kind of hobble it as it links back to the New York Post.

What’s really strange in both of these cases is there’s a sense that both of these companies were making a decision that maybe had some other thinking behind it, some political calculus, or perhaps even just some basic consideration about right or wrong during an election season.

Just to finish the story in both cases, you have a very interesting divergence here. Twitter’s CEO, Jack Dorsey, actually got on Twitter a number of hours later—this is still on a Friday—and actually said, “Hey, we screwed up. We should have been more proactive in communicating about what we did.”

And then later, they actually turned off their restrictions, which is something we can go into more later if we want to. Facebook, interestingly, still to my knowledge, hasn’t communicated about this publicly in any sort of official way. What we do have is an official from Facebook’s communication team going on their Twitter account and talking about this decision. We have, like I said, a lot of layers, and it’s really interesting trying to kind of reverse engineer what the thinking was within these teams.

If the purpose here was to limit the spread of something that may or may not be disinformation, it seems like the Barbra Streisand effect—it seems like going to all this trouble, making all of this mess, and everyone’s heard this story now. What was the point? Did it really work? Did it really have its intended effect? What does that say about what the platforms can and can’t do to control disinformation on their sites?
Yeah, that’s right. Rightly or wrongly, I think you’re kind of making their point for them, which is to say, there is this very convenient political narrative. I’m speaking kind of like Twitter or Facebook that says, “If you guys could just get all of this hate speech and false news under control, the world would be perfect, and democracy would work again.”

And the argument that they keep making is, “Look, we’re doing everything we can, but the problem predates us and is bigger than us. So, I don’t know.” I think this will continue to play out, and there’s a real question of kind of just how much political traction the parties are going to get from this.

But in some ways, it looks like a win to me. I’ll say this kind of at this early moment for them, because I think showing that they took decisive action—and that it still went viral or that it still got so much traction—kind of proves that point. I would say that one thing that we appreciate at PEN America, as we sort of kicked it around as a team, is we appreciated the approach that both of them did take in seeking to limit the spread, but still preserve the space for discourse.

So, philosophically, it’s the right approach. The question as you framed it is the big one, and I don’t think we’ll know for quite some time. Did they have the effect that they were after? Should they have done more? Should they have acted even more aggressively? These are things that, first of all, we may never know, and secondly, they’re going to take some further study.

I saw a tweet today about how someone did the math about just how much cable news coverage this all got, and it was many, many hours into the weekend. This seems to go to this idea of which is the bigger perpetrator here—is it the platforms, the cable news, or is it some combination of both? Another thing that floats in my mind is that we’re two weeks away from the end of the election season effectively, and millions of people are voting as we speak. How have platforms still not really gotten a handle on this? At one point, we heard Facebook say, “We’re not making any changes to our policies during the heat of the campaign season,” and yet, they did make those changes. Why is this now happening?
No, it’s a mess. I think this is frankly—this is the big lesson from my point of view. It is not so much, did they intervene the way they should have? Too weakly? Too strongly? Did they communicate it well enough? The real problem is that the policies for both of these platforms and many of their peers are just a mess.

If you try to understand what actually is Twitter’s policy on something like this, at any given point in time, it seems to be—I’ll be polite and say a moving target. I think that the demand kind of from the public, from advocacy organizations like ours going forward, has really got to be one of transparency and clear communication in advance of what the standards are, and in advance of what the emergency response procedures are. Because otherwise, there’s no way to kind of get a hold on how do these platforms interact with our democracies? Interact with our elections?

The reason that I find this sort of action—two weeks out from an election, like you point out—so troubling is because, to some extent, it’s the election. I’m sort of generally alarmed about the election these days. But, what I’m really alarmed about is that this feels like an appropriate time to sort of innovate—if you want to put it that way—on these sorts of very, very impactful interventions in our public discourse.