As part of our effort to #FightOnlineAbuseNow, we’re publishing a series of pieces about the harm online abuse poses to free speech—but also what Facebook, Twitter, and other social media companies can do to blunt its worst effects.
“I want harassment to be as annoying for my harassers as it is for me to report it.” —Talia Lavin, journalist
All too often, the burden of dealing with online abuse rests squarely on the shoulders of its targets. To protect them and ensure their voices are not silenced, social media companies must actively discourage abuse and hold abusive users accountable—but in ways that do not themselves infringe on freedom of expression.
Nearly half of Americans have experienced online abuse, and 28 percent of them were subjected to its most severe forms, including sexual harassment, stalking, doxing, and physical threats. Such violations not only harm individuals psychologically and physically but also affect communities and societies by chilling free expression and exacerbating inequality. Women, Black, Latino, Indigenous, and LGBTQIA+ writers, as well as members of religious and ethnic minorities, are more likely to be targeted. Accountability is critical to ensuring people can use social media to connect and share ideas without being intimidated and silenced.
“You can’t have free expression of ideas if people have to worry that they’re going to get doxed or they’re going to get threatened,” said Mary Anne Franks, president of the Cyber Civil Rights Initiative and professor of law at the University of Miami. “So if we could focus the conversation on how it is that we can create the conditions for free speech . . . Free speech for reporters. Free speech for women. Free speech for people of color. Free speech for people who are targeted offline . . . that is the conversation we have to have.”
Platforms must be clear, consistent, and transparent about their policies prohibiting abusive behavior and their penalties for users who engage in it. Efforts to deter abuse that rely too heavily on taking down content or accounts can risk sweeping up legitimate disagreement and silencing the very voices they are meant to protect. Here are three key steps social media companies should take now to prioritize accountability, while protecting free expression:
1. Apply and publicize rules in real time
Social media companies must spell out their rules and make this information visible in real time within their platforms. Information on the policies that govern acceptable behavior, as well as what happens when these policies are violated, is often confusing and hard to find. Social media companies should tap into design features such as nudges to remind users about their rules and the consequences for violating them. When users create a new password, for example, they don’t have to go to a separate page to learn about minimum password requirements. Similarly, users should be able to quickly check content against rules without searching through pages of policies on a separate website.
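For instance, a compose box could surface the relevant rule the moment a draft touches on it. Below is a minimal sketch of that idea in Python; the policy snippets, keyword matching, and function names are illustrative assumptions, not any platform’s actual implementation, and a real system would rely on far more sophisticated matching and carefully worded, localized copy.

```python
# Hypothetical policy snippets keyed by the behavior they cover.
POLICY_SNIPPETS = {
    "threat": "Threats of violence are not allowed and may lead to suspension.",
    "dox": "Sharing someone's private information (doxing) is prohibited.",
    "slur": "Slurs and hateful language violate the hateful conduct policy.",
}

def inline_reminders(draft: str) -> list[str]:
    """Return the policy reminders to show next to the compose box."""
    draft_lower = draft.lower()
    return [
        snippet
        for keyword, snippet in POLICY_SNIPPETS.items()
        if keyword in draft_lower
    ]

# The composer UI would render these reminders as the user types,
# the way a signup form shows password requirements inline.
for reminder in inline_reminders("I'm going to dox you"):
    print(reminder)
```

The point is the placement, not the matching: the reminder appears where the user is writing, just as password requirements appear next to the password field.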
Nearly 80 percent of people say social media companies are doing only a fair or poor job of addressing online harassment. And research and experiments in the gaming industry tell us that focusing on rules and enforcement can reduce abuse. One study found that when community rules are made more visible, both compliance and participation by newcomers increase.
2. Use friction to stop abuse
Social media platforms should introduce nudges that build friction into users’ attempts to engage in abuse. Platforms could automatically detect posts with abusive content and encourage users to revise them before posting, pointing out the consequences of violating their policies.
Software that can automatically detect abusive content, while imperfect, already exists, and platforms such as Twitter and Instagram have started to experiment with nudges to discourage abuse. Platforms need to study and communicate openly about the efficacy of these methods, and give independent researchers raw data on the effectiveness of friction and on how automated systems are trained to reduce bias.
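To make the mechanics concrete, here is a minimal sketch of a pre-posting nudge in Python. The classifier, threshold, and prompt wording are assumptions for illustration; real platforms use trained detection models and carefully tested interventions rather than the toy stand-ins shown here.

```python
from typing import Callable

NUDGE_THRESHOLD = 0.8  # assumed cutoff; real systems tune this empirically

def submit_with_nudge(
    text: str,
    abuse_score: Callable[[str], float],
    confirm_anyway: Callable[[str], bool],
) -> bool:
    """Run a draft through a pre-posting nudge.

    `abuse_score` stands in for an automated abuse classifier returning a
    probability in [0, 1]; `confirm_anyway` shows the nudge to the user and
    returns True only if they insist on posting unchanged.
    Returns True if the post should be published as written.
    """
    if abuse_score(text) >= NUDGE_THRESHOLD:
        # Friction: pause, explain the rule and its consequences,
        # and offer a chance to revise before anything is published.
        return confirm_anyway(
            "This post may violate our abuse policy and could result in a "
            "strike on your account. Do you want to revise it before posting?"
        )
    return True

# Example wiring with toy stand-ins for the classifier and the UI prompt.
if __name__ == "__main__":
    toy_classifier = lambda text: 0.9 if "loser" in text.lower() else 0.1
    always_revise = lambda prompt: False  # simulate a user choosing to revise
    print(submit_with_nudge("You're a loser", toy_classifier, always_revise))  # False
    print(submit_with_nudge("Great article!", toy_classifier, always_revise))  # True
```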
3. Escalate consequences for repeat abusers
Companies must map out a system of escalating penalties for abusive behavior online, clearly and repeatedly communicate it to users, and implement it consistently. While content takedowns and account bans are a necessary part of any accountability system, they are also fraught. Content moderation is imperfect and reporting can be weaponized to harass and silence. So it’s critical that platforms also make full use of warnings, strikes, functionality restrictions, and temporary suspensions. Users who repeatedly violate platform policies must face escalating consequences because impunity only emboldens abuse. At the same time, platforms must make urgent and much-needed improvements to appeals processes; users must have an efficient, accessible way to reinstate content or accounts that have been taken down due to malicious reporting or imperfect content moderation.
YouTube, which overhauled its system of warnings and strikes in 2019, offers a model worth exploring. When users violate the platform’s guidelines for the first time, they get a warning and an explanation of why their content was removed and which policies were violated. If the user violates the guidelines again, they get a first strike, which limits how they can use the platform for a week. A second strike means the user can’t post content for two weeks, and a third strike within a three-month period means their channel is deleted. They can appeal at every stage, and all of this is clearly spelled out.
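The escalation logic described above can be expressed compactly. The sketch below models it in Python using assumed data structures and durations; it illustrates the strike-and-window pattern described here, not YouTube’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)  # strikes expire after roughly three months

@dataclass
class Account:
    warned: bool = False
    strike_dates: list[datetime] = field(default_factory=list)

def apply_violation(account: Account, now: datetime) -> str:
    """Return the consequence for a new policy violation, escalating over time."""
    # Only count strikes that are still active within the window.
    account.strike_dates = [
        d for d in account.strike_dates if now - d < STRIKE_WINDOW
    ]

    if not account.warned:
        account.warned = True
        return "warning: content removed, policy explained, no other penalty"

    account.strike_dates.append(now)
    strikes = len(account.strike_dates)
    if strikes == 1:
        return "first strike: limited platform features for one week"
    if strikes == 2:
        return "second strike: no posting for two weeks"
    return "third strike within three months: channel removed (appeal available)"

# Example: four violations in quick succession escalate from warning to removal.
acct = Account()
start = datetime(2024, 1, 1)
for offset in range(4):
    print(apply_violation(acct, start + timedelta(days=offset)))
```

Because expired strikes drop out of the window, an isolated lapse does not accumulate forever, while rapid repeat violations escalate quickly, and every step leaves room for appeal.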
***
By making their rules clear and visible in real time, using nudges to discourage people from sharing abusive content, and stepping up consequences for repeat offenders, social media platforms can greatly improve accountability. Any company that is truly committed to free expression must combat abuse and create conditions where people are not afraid to express themselves.
Learn what you can do to #FightOnlineAbuseNow.