A Journalist’s Guide to Navigating Disinformation on Social Media

Social media can be a powerful tool for newsgathering and reporting. It can help journalists connect with sources from around the world, share breaking news in real time, and elevate otherwise local news stories—like the Flint water crisis—to national attention. Real-time eyewitness reports of breaking news, including live video and photos, can further enhance and inform news reporting.

But with those many benefits come some dangerous byproducts. The rapid evolution of how content is shared, how quickly it’s shared, and by whom makes it uniquely challenging to identify and monitor disinformation online—especially when you’re a reporter on deadline. Social media’s ability to amplify false information to millions of people within seconds makes it a highly attractive resource for bad actors looking to manipulate narratives, influence public opinion, and perpetuate online hate and harassment.

So how can you guard against inadvertently sharing false information in your reporting? In addition to the verification and fact-checking techniques you already use, the following guidelines can help you navigate the uniquely treacherous and fast-moving landscape of social media.

1. Anticipate potential areas of vulnerability for disinformation and trust your instincts.

Disinformation is designed to capture our attention by eliciting a strong visceral or emotional response, such as fear, outrage, or empathy. Take, for example, a recent AI-generated photo that appeared to depict an explosion near the Pentagon, a manipulated video of Nancy Pelosi that was slowed down to make her appear intoxicated, or a photo of a child crouching in the rubble that was purportedly taken in the aftermath of a 2023 earthquake in Turkey but was actually taken in Ukraine in 2018. The emotional responses these images elicit can temporarily impair our ability to think critically. As a result, we may share or amplify disinformation before considering its credibility.

If a post appears to play on sociocultural vulnerabilities or to catch improper activity red-handed—for example, an image that actually shows workers counting ballots but is captioned as individuals counting fake ballots after election observers were sent home—consider whether another interpretation might be possible. If you sense that a post might spark alarm in others, or if you have a particularly strong emotional reaction to the content, trust your instincts and investigate further.

2. Ensure the source account is credible and legitimate.

Before reposting or reporting on content shared on social media, analyze the poster’s profile, including their username, photo, bio, and the accounts they follow. Consider whether the poster might benefit from spreading false or misleading information about the subject of the content. Check to see if the poster follows accounts affiliated with extremist groups or other groups known to spread mis- and disinformation or if they frequently tag journalists or public figures to attract attention. If so, be cautious of sharing or reporting on the content without further fact-checking and corroborating evidence.

While mis- and disinformation is often spread by real people, some of it is spread by automated bots. Though designed to appear and behave like humans, these accounts are used to spread false information and manipulate public opinion. If you suspect an account may be a bot, check for some of the following indicators (a rough scoring sketch follows the list):

  • A blank or sparse bio with no easily verifiable information in the profile, e.g., no job title or workplace.
  • A recently created account that already has thousands of followers.
  • A feed predominantly consisting of political or sensational posts (including re-shares of prominent political voices) with little to no content about the account holder’s personal life, interests, or local community.
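
For readers comfortable with a bit of code, here is a minimal Python sketch of how those indicators might be scored. The thresholds and input fields are illustrative assumptions, not platform data or Bot Sentinel’s methodology; treat any flags it raises as a prompt for closer review, not proof that an account is automated.

    from datetime import datetime, timedelta, timezone

    def bot_warning_signs(bio, created_at, followers, political_posts, total_posts):
        """Return the red flags from the list above that an account trips."""
        signs = []
        if len(bio.strip()) < 20:  # blank or sparse bio
            signs.append("sparse or empty bio")
        account_age_days = (datetime.now(timezone.utc) - created_at).days
        if account_age_days < 90 and followers > 5000:  # young but heavily followed
            signs.append("recently created account with thousands of followers")
        if total_posts and political_posts / total_posts > 0.9:  # little personal content
            signs.append("feed is almost entirely political or sensational posts")
        return signs

    # Example: a six-week-old account with no bio and 12,000 followers.
    print(bot_warning_signs(
        bio="",
        created_at=datetime.now(timezone.utc) - timedelta(days=42),
        followers=12000,
        political_posts=480,
        total_posts=500,
    ))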

Bot Sentinel is a helpful online tool for analyzing accounts and posts on X, formerly known as Twitter. It ranks accounts on a spectrum from “Normal” to “Problematic.” For a step-by-step guide on how to use Bot Sentinel, check out our resource on Detecting Disinformation.

And for more practice identifying automated accounts, check out the Spot the Troll Quiz developed by the Media Forensics Hub at Clemson University.

3. Use contextual analysis and online tools to verify the authenticity of photos and videos.

In today’s Insta-fueled culture, a picture may be worth more than a thousand words. But not all images claiming to depict current events are legitimate. Some are miscaptioned, misidentified, manipulated, or drawn from prior events. Of course, it’s always best to confirm the accuracy of an image with the original poster, an eyewitness, or another trusted source. But if news is breaking fast, or you’re having difficulty tracking down a source, here are some tools to help you spot potentially manipulated or misleading images:

  • Scan the photo or video for timestamps that may reveal the date and time the image was captured (a metadata sketch follows this list). Bad actors often try to pass off old photos and videos as depictions of current events.
  • Check for any geographical markers that may help you identify whether the photo or video was captured at the claimed location. Look for street signs, building names, or other landmarks and use a tool like Google Maps or Bellingcat’s OpenStreetMap Search Tool to help verify their location.
  • Look for corroborating contextual clues. Does the image show people with umbrellas walking through the rain? Check the weather report for the purported location of the image at the date and time it was allegedly captured.
  • Use online tools like Google Reverse Image Search, TinEye, or RevEye to gather information about the original source of a photo, or InVID to analyze and gather contextual information about a video. Many of these tools have browser extensions that allow you to analyze an image without leaving the social media site itself. Our Disinformation Defense Toolkit offers step-by-step instructions on how to use these tools.
  • If you’re concerned a video may be a deepfake, pay particular attention to the face—eyes, skin tone and smoothness, lip movements—as many manipulated videos contain altered facial features. If the video features a public official or purports to have been taken at an official event, check to see if there are any official statements or press releases accompanying the video.
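
The timestamp and location checks above can sometimes be jump-started programmatically. The rough Python sketch below uses the third-party exifread library (installed with pip install exifread) to read the capture time and GPS coordinates embedded in a photo’s metadata; the file path is a placeholder, and note that most social platforms strip this metadata on upload, so the approach mainly helps with original files sent to you directly by a source.

    import sys
    import exifread  # third-party: pip install exifread

    def describe_photo(path):
        """Print the capture time and GPS data recorded in a photo's EXIF metadata."""
        with open(path, "rb") as f:
            tags = exifread.process_file(f, details=False)

        # Capture date and time, if the camera recorded them.
        taken = tags.get("EXIF DateTimeOriginal")
        print("Claimed capture time:", taken if taken else "not present")

        # GPS coordinates, if the camera recorded them.
        lat = tags.get("GPS GPSLatitude")
        lat_ref = tags.get("GPS GPSLatitudeRef")
        lon = tags.get("GPS GPSLongitude")
        lon_ref = tags.get("GPS GPSLongitudeRef")
        if lat and lon:
            print("Embedded GPS position:", lat_ref, lat, lon_ref, lon)
        else:
            print("No GPS data embedded.")

    if __name__ == "__main__":
        describe_photo(sys.argv[1])  # e.g. python check_photo.py photo.jpg

Any coordinates recovered this way can be pasted into Google Maps or OpenStreetMap and compared against the claimed location and the landmarks visible in the image.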

If reporting on a photo or video you’ve discovered to be fake, manipulated, misidentified, or miscaptioned, be sure to watermark any portion of the image you share to make clear that it is false and to prevent bad actors from manipulating it further. Clearly describe how the image or its context has been manipulated rather than simply labeling it as manipulated.

4. Use your social media accounts to boost credible information and build trust.

Many people use social media as their primary pathway to information, and they may be following you as a trusted source. When a journalist associated with a news outlet retweets or amplifies a piece of content, the imprimatur of credibility that the media organization enjoys can elevate and legitimize the information, whether or not that outlet has reported or verified it. When sharing content or reporting on social media:

  • Be transparent about your reporting choices. If you called out mis- or disinformation in your reporting, explain why and cite your sources.
  • Choose your words carefully. While it’s tempting to use clever or catchy language, or even sarcasm, in your posts, consider how a bad actor might repurpose those words or images to spread disinformation or discredit your reporting.
  • If you feel comfortable doing so, consider responding to thoughtful questions or comments on your posts.
  • Pose questions to your followers about what they’ve found confusing or frustrating about recent news stories and consider following up with further reporting.
  • If you discover any errors in your reporting, correct them as soon as possible. Take time to explain why and how the error occurred and, if applicable, what steps you or your newsroom will take to help prevent a similar situation in the future.

5. Protect yourself against online harassment and abuse.

Navigating social media can be challenging under the best of circumstances. With the spread of mis- and disinformation threatening to undermine public confidence in the news, reporters must be diligent in defending themselves against online harassment and abuse. PEN America’s Online Harassment Field Manual offers useful strategies and resources for responding to online abuse, tightening your digital safety, practicing self-care, and providing assistance and support to your colleagues.

What’s missing?

Facts Forward is a collaborative effort. If you have questions, suggestions to improve this resource or others, or want to highlight disinformation reporting done well, please send an email to factsforward[@]pen.org.