The PEN Pod: On Making a Safer Internet with Suzanne Nossel
Every Friday, we discuss tricky questions about free speech and expression with our CEO Suzanne Nossel, author of Dare to Speak: Defending Free Speech for All, in our weekly PEN Pod segment “Tough Questions.” In this week’s episode, Suzanne speaks about the implications of monitoring social media content, what may happen if the Biden administration pursues breaking up Silicon Valley, and a Georgia lawsuit where one dollar was sought in damages. Check out the full episode below (our interview with Suzanne begins at the 12:58 mark).
The New York Times reported this week that despite a lot of promises on the campaign trail from both President Biden and former President Trump, Congress now seems less inclined to revoke Section 230 of the Communications Decency Act, which shields Facebook, Twitter, and others from legal action for content on their platforms. It seems lawmakers might be going for a more piecemeal approach, maybe targeting paid content rather than all content protections. Is this a walk back by the Biden folks, and is this realistic in terms of protecting free speech while also stemming disinformation—which is a real threat?
I don’t think anyone really believed they were ever going to repeal Section 230 entirely, although they did say that. It was always clear to anyone with knowledge of this issue that wholesale repeal was unrealistic and too extreme, and that it would hobble the internet; even the most serious experts who address the harms of online content haven’t advocated it.
While that language was used, I don’t think anyone really credited it, and most people expected things to proceed more or less as it seems to be happening, which is an examination of a set of proposals for how to pare back and reform Section 230. This is what I always expected would happen, and it was clear that the regulatory momentum was building. There was some bipartisan support—there were these examples from Europe and elsewhere that American legislators were paying closer attention to—so things are proceeding more or less as expected.
There are a number of proposals on the table. The one that you reference is called the SAFE Tech Act—it’s been introduced by Senators Hirono, Warner, and Klobuchar—and it does target paid content. The thrust of it is intended to be paid advertising on the platforms, although some critics argue that the way it’s currently worded could be construed more broadly, to encompass any content that is linked to a paid transaction—with an ISP or a server provider, for example—so the language may not be perfect as is. I think the idea of trying to home in on advertising and paid content is a good one. It’s something that we supported in our report on disinformation.
There’s the idea that there should be a higher standard that is applicable to platforms when they’re in a client-to-advertising-platform relationship. That it’s fair, in that instance, to accept and impose a higher burden of due diligence: you ought to know who you’re doing business with. Who’s paying you? Is there a beneficial owner somewhere hidden in the shadows? What is the message? Is the message one that is misleading or deceptive? There are rules of that sort for print advertising—for example, pharmaceutical ads and the claims they can and can’t make—and there are advertising laws that apply, so the notion of extending that to the online realm has been on the table for some time, and I think it’s overdue.
Whether this particular draft needs to be further honed—it’s possible that it’s not perfectly constructed—I think this is a pretty reasonable place to start. One reality of all of this is that no one really knows how the reform of Section 230 would play out, and there are a lot of doomsday purveyors who argue that any change to the liability shield for the platforms is going to destroy the internet as we know it: that the platforms will go way overboard in suppressing and deleting content for fear that they could be held liable for it, and that it might really gum up the works in that posts on platforms, websites, and blogs would have to undergo legal review before they could go up—interfering with the real-time immediacy that the internet offers us.
There are a lot of worries out there. I don’t think they’re all baseless, but I don’t think they’re all well-grounded either. The fact is we don’t know a lot about how changing the fundamental legal structure that underpins the internet—including Section 230—is going to shift the incentive structure, and what the behaviors of platforms and users are going to be in the wake of that. That is inherently something of an unknown. You can predict, extrapolate, speculate, but until we implement some of these changes, we won’t actually know.
I think we have to look at this as something of an experimental phase, and I would support adopting some of these reforms on a sunset basis. Let’s try them for a year, for 18 months, for two years, and see how they work out and make sure we’re studying that, that researchers have the resources, access, and transparency from the platforms needed to really be able to tell us what changes ensued as a result of these reforms.
There are two things at play here now, because there are the lawmakers pursuing the Section 230 route, but we’re also seeing this week the Biden administration indicating that it’s bringing into the administration two tech experts who have been big critics of Silicon Valley, and who have at times advocated for breaking up the big tech companies. Does that in any way protect free expression?
Not necessarily. The reason antitrust enforcement is on the table with respect to big tech companies—and it’s not just social media platforms; it’s Google, the search engine, and Amazon, which now controls such a vast dominion over our e-commerce—is that the concerns driving that movement are not principally about content itself. They’re about anti-competitive behavior. They’re about Facebook’s track record of swallowing up hundreds of potential competitors and being able to control not just so much of our discourse, in terms of the sharing and communication people do on the platform, but all kinds of vertical integrations and other types of apps and services, both for businesses and for consumers, that contribute to the backbone of the internet.
They’ve kind of just sprawled in so many different directions, and the same is true for each of those companies. The fear is that Silicon Valley has become locked up in just a few hands, and that is going to impede innovation and change because these companies will be complacent. It’s going to disserve consumers because there’s not enough competition to push for better incentives or pricing. Those are the concerns driving this move toward potential antitrust enforcement.
There is a feeling that one thing that compounds the problematic propensities for content online—things like disinformation, online harassment, the spread of terrorist recruitment, COVID-related hoaxes, conspiracy theories, the various harms that we associate with online content—is the small number of platforms that control so much content. If you have a QAnon conspiracy theory that gets seeded on Facebook, it can propagate so widely, because Facebook itself is so vast.
It’s also true that when it comes to, for example, online harassment, Twitter is a real locus of this cesspool of misery for journalists and writers—an issue that we work on extensively at PEN America—but I don’t think anyone’s accusing Twitter of being a monopolist. And the most obvious solutions for breakup would attack these companies vertically rather than horizontally. I don’t think regulators would say to Facebook, “Of your billions of users around the world, you’ve got to cut that group of people in half, and half of them go to the Face platform and half of them go to the Book platform and the two can never meet.” I don’t think that’s going to be the regulatory answer here, and without that, it’s not clear that the content-related problems we are most concerned with really have resolution through an antitrust channel.
I’m going to ask you to put your lawyer hat on for a Supreme Court decision this week that overturned two lower court rulings. The Court held that a Georgia student could seek damages after his school prevented him from handing out religious literature. The school actually walked the policy back, the student is no longer enrolled, and it doesn’t seem the student suffered economic damages. Why is it important that the Court ruled this way?
It’s sort of a strange ruling. Ultimately, it didn’t really hinge on a free expression issue—although the underlying case and fact pattern, about distribution of literature on a campus, certainly raise one. The claim that the university made, basically, was that the case was moot, because the action had been reversed and the conduct was over with and not ongoing, and normally in the courts there’s a requirement of a live case or controversy. So a matter can be mooted by any number of circumstances—because somebody died, because somebody left office (for example, our own lawsuit against Donald Trump, which was mooted after he left office)—there are all kinds of reasons the conduct can simply come to an end, and that can moot out a lawsuit.
The ruling in this case was that because the plaintiff had a claim for nominal damages—he sought one dollar from the university, in sort of symbolic damages—that claim was a basis upon which the lawsuit could proceed: as long as that damage claim was in there, the lawsuit was not mooted out by the change in conduct. Although there was a caveat: if the university had simply paid the dollar, the whole case would have ended. A strange case and ruling, but not one that ultimately has particular implications for free speech.