Three and a Half Ways Not to Fix The Internet
Following on the heels of President Trump’s Executive Order on Preventing Online Censorship and Joe Biden’s earlier call to revoke Section 230, recent days have seen no fewer than four proposals for reform of 47 U.S.C. § 230. Section 230 is the federal law that limits liability for digital platforms like Facebook and Twitter when it comes to content posted on their sites. Debate over the 1996 law has come to the forefront in recent years, with growing calls for increased accountability for the social platforms. Thirty-five percent of Americans report experiencing online harassment on the basis of sexual identity, race, or religion in the last year; clearly, the need for action to protect everyone’s ability to safely participate in our digital forums is real. Yet Section 230 is critical infrastructure for an open internet and free expression, and most of the recent proposals rush to tear it down to the detriment of all. You don’t address highway safety by blowing up the roads.
Here’s a summary of last month’s proposals—three from senators, and one from the Justice Department—and what they could mean for free expression online.
On June 17, a trio of Republican senators (Josh Hawley, Marco Rubio, and Tom Cotton) introduced the Limiting Section 230 Immunity to Good Samaritans Act.
Hawley’s legislation seeks to create a new layer of responsibility for the largest platforms to provide transparent and “good faith” content moderation. Citizens who believe they have been unjustly censored, perhaps on partisan or ideological grounds, would be able to sue for damages.
There are several difficulties here. The first is the challenge of determining what qualifies as “bad faith.” How could any one decision be categorized as such relative to all other moderation decisions, at global scale? In the case of supposed “shadow bans” or deprioritization of content, how would users be expected to know? The Act doesn’t contemplate these questions. Instead, it creates a potent engine for discrimination theater, in which individuals can bring (or be encouraged to bring) individual lawsuits against platforms in order to create political pressure that serves the agenda of the day. For example, these suits could be mobilized to push the unsupported narrative that conservative voices are being actively silenced by the platforms. But the mere threat of an adverse ruling could create perverse incentives for the tech platforms to avoid moderation of even very problematic content, such as aggressive harassment, where there is no specific court order or clear legal bright line to fall back on. As America turns toward the elections in November, disinformation and trolling pose a real threat, and Hawley’s legislation would provide a powerful new mechanism for political interests to, as Steve Bannon has said, “flood the zone” with more of it.
The very same day, the Department of Justice (DOJ) published a report, Section 230 — Nurturing Innovation or Fostering Unaccountability?, presumably outlining the administration’s preferred approach to reform.
DOJ’s proposals are a bit of a grab bag. Taken as a whole, they would be more damaging and far-reaching than those contemplated by Senators Hawley, Cotton, and Rubio. DOJ does highlight several legitimate concerns related to 230 reform: antitrust and competition in the marketplace of social media platforms, the enduring implications of the Stratton Oakmont ruling in the absence of Section 230, and opportunities to clarify vague statutory language (for example, by requiring that de minimis reporting mechanisms be made available to users). However, many of the recommendations provided could have unintended consequences for free speech, either by incentivizing over-moderation of content by risk-averse companies or by creating openings for politicized regulatory actions. In some cases, the recommendations blur the line between formal adjudication of content as illegal by law enforcement and the courts, and an evaluation by the platforms of the likelihood that content might be illegal, outside of any legally accountable process.
Most importantly, DOJ recommends withholding 230 protections from providers who do not “maintain the ability to assist government authorities to obtain content (i.e. evidence) in a comprehensible, readable, and usable format.” This requirement would at minimum entail the destruction of secure, private communications among all users, whether through the creation of DOJ’s long-sought encryption “backdoors” or the wholesale removal of end-to-end encryption on platforms such as WhatsApp and Signal. Perhaps a sign of the desire to push this hugely damaging agenda through with as little public scrutiny as possible: this recommendation, which appears as “Carve-Out for Actors Who Purposefully Blind Themselves and Law Enforcement to Illicit Material” in the PDF version of the report, is entirely missing from the nearly identical list provided on the linked webpage.
On June 25, Georgia Senator Kelly Loeffler introduced a bill entitled “Stopping Big Tech’s Censorship Act,” which echoes aspects of both the DOJ’s recommendations regarding “bad actors” and Hawley’s “good faith” requirements in its “First Amendment Requirements.” Under these requirements, eligibility for 230 protections would be contingent on “viewpoint neutral” content moderation.
All three of these proposals would gut Section 230 and jeopardize a free internet by creating conditions under which platforms would be incentivized to aggressively over-moderate content in order to avoid the risk of losing 230 protections. This echoes the damaging strategies of SESTA-FOSTA and Germany’s “NetzDG” legislation. Among the unintended consequences of SESTA-FOSTA has been the wholesale removal of transgender creative forums and of accounts with even a whiff of a connection to sex work or even sexual themes. These new proposals, however, would reach much further, with severe consequences for free expression online.
A rather different and bipartisan contribution to the debate comes from Senators Brian Schatz (D-HI) and John Thune (R-SD), who on June 24 introduced the Platform Accountability and Consumer Transparency (PACT) Act. This proposal would leave Section 230 essentially untouched and focuses instead on platform transparency, accountability, and responsiveness to users. Like Hawley, Schatz and Thune would place the compliance burden primarily on larger platforms, requiring less of those with under one million monthly visitors and $25 million in annual revenue.
Like Hawley, Loeffler, and the DOJ, Schatz and Thune see a need for more transparency, requiring the publication of “Acceptable Use” policies and standardized quarterly content moderation transparency reports. But while the other proposals try to legislate intent, Schatz and Thune focus primarily on complaint and appeal mechanisms. Repositioning content moderation as a consumer protection issue, PACT pairs its transparency requirements with detailed rules by which platforms would be required to allow users to complain about content and appeal decisions regarding removals. It requires, for example, review of potentially violative content within 14 days and the creation of complaint call centers. Where PACT would amend 230 is with regard to platform liability for illegal content, which would have to be removed within 24 hours of formal notification backed by a court order. Here again, PACT sidesteps the creation of perverse incentives that could lead platforms either to turn a blind eye or to aggressively over-moderate.
All in all, PACT is a thoughtfully constructed proposal that is careful to define its terms. It provides enough specificity that it can be debated on its merits and subjected to constructive discussion and amendment. A further sign of the seriousness of Schatz’s proposal relative to the rest of the June cohort is its exemption of “internet infrastructure” services from these requirements. As others have pointed out, these providers (such as Amazon Web Services and its competitors) are categorically different from websites and social media platforms, and require a distinct approach. PACT suffers, however, from a seeming lack of technical consultation on several points that will need to be addressed. A few examples: many small sites hit the one-million-visitor mark before achieving any scale in staff or revenue; call centers seem unlikely to keep pace with the volume of complaints digital platforms generate; and there is a history of court-ordered takedown processes being gamed by a shady cottage industry of law firms.
It’s no mean feat in the current environment to propose nuanced, bipartisan Section 230 reform. The biggest question, though: does it go far enough? While the Hawley, Loeffler, and DOJ initiatives seem bent on construing all content moderation as necessarily “political,” Schatz and Thune seem to be saying that the only real responsibility platforms have is to respond in a timely fashion to a court order. In addition to being an important effort to put forward a 230 reform proposal that does not risk incentivizing censorship, PACT also demonstrates the difficulty of finding legislative solutions that both preserve free expression and make a real dent in the runaway problems of harassment, hate, and disinformation we are experiencing.
Matt Bailey serves as PEN America’s digital freedom program director, focusing on issues, ranging from surveillance and disinformation to digital inclusion, that affect journalists and writers around the world.