[Illustration: silhouettes of people talking to one another; text in the center reads, “Online abuse isn’t just about hurt feelings.”]

As part of our effort to #FightOnlineAbuseNow, we’re publishing a series of pieces about the harm online abuse poses to free speech, and about what Facebook, Twitter, and other social media companies can do to blunt its worst effects.

Writers and journalists increasingly rely on social media platforms to stay on top of news, find sources, engage with readers, and promote their work. Yet their visibility and the nature of their work—to challenge the status quo and hold the powerful accountable—can make them lightning rods for online abuse. They are relentlessly harassed in these spaces, especially if they are women, Black, Latino, LGBTQ+, members of religious or ethnic minorities, or if they cover topics such as feminism, politics, or race.

But there are concrete steps social media companies can take to reduce the devastating impact of online abuse by giving users more control over their privacy, security, identity, and account history. Here are five features they should build:

1. Safety modes: making it easier to tighten privacy and security settings

While social media companies give users granular control over their settings, it can be confusing and time-consuming to figure out how these adjustments affect visibility and reach. For writers and journalists who require a public presence online to do their jobs, balancing visibility against safety is full of trade-offs. When under attack, they often freeze their accounts until the trouble passes, but that means they can’t engage with friends, followers, or the public.

Platforms need to make it easier for users to fine-tune their privacy and security. Users should be able to save configurations of settings into personalized “safety modes,” which they can easily toggle between. When they alternate between safety modes, a “visibility snapshot” should show them in real time who will be able to see their content.
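To make the idea concrete, here is a minimal sketch in TypeScript of what a saved safety-mode configuration and its visibility snapshot might look like. Everything here, including the `SafetyMode` shape, the individual settings, and the `visibilitySnapshot` helper, is hypothetical and invented for illustration; no platform currently exposes such an API.

```typescript
// Hypothetical data model for user-defined "safety modes".
// None of these names correspond to a real platform API.

type Audience = "everyone" | "followers" | "mutuals" | "nobody";

interface PrivacySettings {
  whoCanSeePosts: Audience;
  whoCanReply: Audience;
  whoCanDirectMessage: Audience;
  whoCanTag: Audience;
}

interface SafetyMode {
  name: string; // e.g. "Everyday" or "Under attack"
  settings: PrivacySettings;
}

const everyday: SafetyMode = {
  name: "Everyday",
  settings: {
    whoCanSeePosts: "everyone",
    whoCanReply: "everyone",
    whoCanDirectMessage: "followers",
    whoCanTag: "followers",
  },
};

const underAttack: SafetyMode = {
  name: "Under attack",
  settings: {
    whoCanSeePosts: "followers",
    whoCanReply: "mutuals",
    whoCanDirectMessage: "mutuals",
    whoCanTag: "nobody",
  },
};

// A "visibility snapshot": a plain-language summary of who will
// see what, shown before the user confirms the switch.
function visibilitySnapshot(mode: SafetyMode): string {
  const s = mode.settings;
  return [
    `Mode "${mode.name}":`,
    `  Posts visible to: ${s.whoCanSeePosts}`,
    `  Replies allowed from: ${s.whoCanReply}`,
    `  Direct messages from: ${s.whoCanDirectMessage}`,
    `  Tagging allowed from: ${s.whoCanTag}`,
  ].join("\n");
}

console.log(visibilitySnapshot(underAttack));
```

The point of the snapshot is that the consequences of a switch are spelled out in plain language before the user commits, rather than buried across a dozen separate settings screens.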

2. Identities: distinguishing between the personal and professional

Fusing personal and professional identities online can make writers and journalists more vulnerable, as abusive trolls leverage private information to humiliate, discredit, and intimidate them, their friends, and their families. Social media platforms should make it easier to create boundaries between private and public “identities” online, while allowing users to keep their audiences. Users should be able to toggle between personal and professional identities, and migrate or share audiences between them. Platforms should also allow users to decide which subsets of friends or followers see their content—features that Facebook, Instagram, and Twitter are already experimenting with.
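As a rough illustration of the mechanics, the sketch below models one account holding several identities, each with its own audience, plus helpers to switch between them and to share an audience from one identity to another. All of the names (`Identity`, `switchIdentity`, `shareAudience`) are invented for this example; this is not a real platform feature.

```typescript
// Hypothetical sketch of separable personal/professional identities
// that share one underlying account. All names are invented.

interface Identity {
  handle: string;        // public-facing name for this identity
  isPublic: boolean;     // professional identities are public
  audience: Set<string>; // follower IDs for this identity
}

interface Account {
  identities: Identity[];
  active: number; // index of the identity currently in use
}

// Switch which identity the user is posting and visible as.
function switchIdentity(account: Account, handle: string): void {
  const i = account.identities.findIndex((id) => id.handle === handle);
  if (i === -1) throw new Error(`No identity named ${handle}`);
  account.active = i;
}

// Share (copy) an audience from one identity to another, so users
// do not have to rebuild a following from scratch.
function shareAudience(from: Identity, to: Identity): void {
  for (const follower of from.audience) {
    to.audience.add(follower);
  }
}

const account: Account = {
  identities: [
    { handle: "@jsmith_reports", isPublic: true, audience: new Set(["a", "b", "c"]) },
    { handle: "@jane_private", isPublic: false, audience: new Set(["a"]) },
  ],
  active: 0,
};

shareAudience(account.identities[0], account.identities[1]);
switchIdentity(account, "@jane_private");
```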

3. Managing account histories

While people may switch jobs and careers—and even shift their views over time—their social media histories, which can date back more than a decade, become treasure troves for abuse. Social media platforms should make it easier for users to manage their account histories, including the ability to search and review old posts, make them private, and delete or archive content.
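A minimal sketch of what such bulk history controls might look like: search old posts by keyword and date, then apply a new visibility to all of the matches in one step. Again, every name here is hypothetical; no platform exposes this API today.

```typescript
// Hypothetical sketch of bulk account-history controls:
// search old posts, then make matches private, archived, or deleted.

type Visibility = "public" | "private" | "archived" | "deleted";

interface Post {
  id: string;
  text: string;
  postedAt: Date;
  visibility: Visibility;
}

// Find posts older than a cutoff date that match a keyword.
function searchHistory(posts: Post[], keyword: string, before: Date): Post[] {
  return posts.filter(
    (p) =>
      p.visibility !== "deleted" &&
      p.postedAt < before &&
      p.text.toLowerCase().includes(keyword.toLowerCase()),
  );
}

// Bulk-apply a new visibility to a set of posts.
function setVisibility(posts: Post[], visibility: Visibility): void {
  for (const p of posts) {
    p.visibility = visibility;
  }
}

const history: Post[] = [
  { id: "1", text: "Old take on city politics", postedAt: new Date("2012-03-01"), visibility: "public" },
  { id: "2", text: "Recent reporting thread", postedAt: new Date("2024-06-10"), visibility: "public" },
];

// Make everything older than 2015 that mentions "politics" private.
const stale = searchHistory(history, "politics", new Date("2015-01-01"));
setVisibility(stale, "private");
```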

To preserve transparency and accountability, especially for social media accounts used by public officials and entities, it is critical that journalists have access to internet-archiving tools and that laws require public officials to retain records of communications that may be disclosed to the public.

4. Anti-harassment help centers: educating users on how to protect themselves

Social media companies have been improving their anti-harassment features, but many of these are still hard to find and navigate. Each platform should build a user-friendly section in its help center that deals specifically with online abuse, including internal features and links to external tools and resources. Facebook, Instagram, and Twitter need to get creative, using nudges, quizzes, sign-on prompts, and videos to get the message across. They must invest in training vulnerable users, such as journalists and writers, to proactively use features that reduce risk and exposure to attacks.

5. Third-party tools

Beyond the major social media platforms, start-ups, nonprofits, and universities are building third-party tools to help counter online abuse. Some scrub private information from data-broker sites; others help users manage their Twitter account histories. A handful enlist allies to help those facing abuse. Still others filter, mute, or block problematic accounts, or demystify convoluted privacy and security settings. Many of these tools are still in the early stages of development, underfunded, or not widely enough known to reach the majority of users in need. Some carry costs for the consumer, which may be an insurmountable obstacle for those who need them most.

Social media platforms should recognize the gravity of online abuse and support third-party tools—especially those built by and for women, Black, Latino, Indigenous, or LGBTQ+ technologists with direct experience of online abuse—by investing in their research and development, and providing access to the data and information they need to succeed. They should consider integrating third-party tools that have proven effective at mitigating online harassment.


Online attacks can damage mental and physical health, chill free expression, and silence voices that are already underrepresented in the creative and media sectors and in public discourse. By embracing these five concrete and actionable recommendations, social media companies can better protect all vulnerable individuals and create a safer online environment where writing, creativity, and ideas can flourish.