Navigating Generative AI and the Threat of Disinformation

Today, many newsrooms are exploring how generative AI can assist with basic or discrete tasks, such as data-heavy reporting on financial markets, to help free up time for more complex news gathering, editing, and reporting. But the benefits offered by generative AI come with significant potential risks. The content produced by generative AI is only as good as the source material it pulls from. With mis- and disinformation swirling online, it can be difficult to ensure that AI-generated content is accurate and credible. Generative AI tools have also made it easier to create sophisticated fake images, video, and audio, making it increasingly challenging for you and your audience to detect fake or manipulated “news.”

The importance of knowing how to responsibly use AI-generated content and how to debunk AI-generated mis- or disinformation is clear, but the best way to do so is not. Existing AI-detection tools have not consistently or reliably distinguished real images from AI-generated ones. As generative AI grows more sophisticated, researchers are working to develop new tools for detecting content created or manipulated by AI technologies, but when such tools may be available, or how effective they may be, remains unknown. In the interim, we offer the following guidance for using generative AI and for guarding against potential mis- or disinformation created by AI tools. For a more detailed analysis of the emerging free expression issues raised by the increased prevalence and usage of generative AI, check out PEN America’s 2023 report, Speech in the Machine: Generative AI’s Implications for Free Expression.

Lean in to healthy skepticism and trust your journalistic instincts.

Generative AI tools not only make it easier for bad actors to spread mis- and disinformation; the ease of access to these tools also means that even users who merely experiment with the technology for fun can inadvertently create confusion. For example, in March 2023, while media outlets were reporting that Donald Trump might be indicted for falsifying business records, Eliot Higgins, founder of the investigative journalism group Bellingcat, shared images on Twitter that he created using an AI image generator. The images appeared to show Trump being arrested. Higgins stated clearly that the images were AI-generated, but they were quickly shared without that context, in one case with the caption: “#BREAKING : Donald J. Trump has been arrested in #Manhattan this morning!” Donald Trump himself recently shared a manipulated video of CNN host Anderson Cooper on his Truth Social account. The video’s creators used an AI voice-cloning tool to distort Cooper’s reaction to the town hall with Trump that CNN hosted in May.

As of mid-November 2023, Google will require that AI-generated or manipulated election ads posted on Google platforms, including YouTube, include a prominently placed disclaimer. While this is a step in the right direction, it applies only to election ads and places the onus on the poster to truthfully acknowledge the use of AI tools. Images leave a lasting impression, and even less sophisticated imagery can be convincing as people scroll quickly through a social media feed. If you have a particularly strong reaction to an image or video, or if you sense it may be catering to biases or vulnerabilities, trust your instincts and investigate further. Consider whether another interpretation of the content may be possible and who might benefit from spreading false or misleading information about the subject of the content. While it is always best to verify the accuracy of the content with the original source or another trusted source, free online reverse image and video search tools can help you spot contextual inconsistencies or identify whether any part of an image has previously appeared online. You can find step-by-step instructions on how to use these tools in our Disinformation Defense Toolkit. Keep in mind, however, that as helpful as these tools are for providing context, they were not created or designed to identify AI-generated content.

To combat generative AI’s potential for creating realistic fake news sites, work to build trust with your audience and establish your news outlet as a go-to source for credible news.

Generative AI technology makes it easier to create entire fraudulent news platforms that look credible and convincing. “Pink slime journalism”—a practice by which hyper-partisan news sites disguise themselves as professional local news outlets—has been an increasing concern in recent years. But most of these sites have been relatively easy to identify: the articles are obviously regurgitated press releases with no reporter bylines. However, generative AI could eliminate such indicators. In a February 2023 article, Poynter showed that ChatGPT can generate an entire fake news organization—complete with reporter bios, masthead, editorial policies, and news articles—in less than half an hour. Media literacy efforts often teach news consumers to look for things like corrections and ethics policies, information on ownership and finances, and newsroom contact information to help assess if news sources are legitimate. But if these can all be convincingly invented, the public’s ability to identify credible news outlets is dramatically weakened. The more your news outlet can establish trust with its audience and community, the more they will look to you as their primary source for credible news and information. For tips on helping to grow trust with your audience, check out our guide on Building Disinformation Resilience.

If you use generative AI in your newsroom, consider its potential to reproduce social bias.

AI systems reflect the biases and predispositions of their creators and the information upon which they draw. There are, however, unique and subtle ways in which generative AI can create and reproduce bias. A website that uses algorithms to curate content, for example, might inadvertently highlight more white, male writers if the algorithm itself was trained on a data set that skews toward white, male writers and includes fewer writers of color or female writers. Such tendencies could reproduce systemic societal biases and inequalities, potentially reinforcing existing disparities in representation. If you use generative AI in your research or writing, be mindful that the content produced may not reflect the full range of voices on a given topic and may not be representative of the interests and demographics of your community. When reviewing AI-generated content, ask yourself what perspective(s) may not be reflected and independently seek out other sources of information.
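To make this mechanism concrete, here is a minimal, hypothetical sketch, written in Python with invented writer names and engagement numbers, of how an engagement-based curation algorithm can reproduce skew in its historical data; it is illustrative only and does not depict any particular newsroom’s system.

from collections import Counter

# Historical engagement data the curation algorithm "learns" from.
# Writers from one group appear far more often than writers from another,
# reflecting past coverage patterns rather than the merit of today's work.
historical_clicks = (
    ["writer_a1"] * 500 + ["writer_a2"] * 450 +
    ["writer_b1"] * 60 + ["writer_b2"] * 40
)
popularity = Counter(historical_clicks)

# Today's candidate articles, one per writer.
candidates = ["writer_a1", "writer_a2", "writer_b1", "writer_b2"]

# "Curation": rank today's articles by yesterday's engagement.
ranked = sorted(candidates, key=lambda w: popularity[w], reverse=True)
print(ranked)  # the over-represented writers always surface first

Because the ranking never looks beyond past engagement, the under-represented writers cannot climb the list no matter how strong their current work is, which is the feedback loop described above.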

Continue to independently fact-check all information produced by generative AI.

Even in the absence of bad actors, generative AI tools can produce mis- and disinformation. The language models behind generative AI chatbots are trained on existing content. The widespread prevalence of disinformation online makes it inevitable that such falsehoods form part of the data set on which large language models are trained. This poses challenges for ensuring the content created by chatbots is credible and fact-based. To the extent your newsroom uses generative AI for research or writing tasks, be sure to independently fact-check the results and utilize the tools you already employ for spotting disinformation. Because AI-generated content is vulnerable to disinformation, we do not recommend using generative AI for reporting on breaking news or for fact-checking.

Establish a policy for how and when your newsroom will use generative AI.  

Newsrooms should establish clear policies and procedures for how and when they will use generative AI. These policies should address, for example, what types of tasks may be delegated to generative AI, what tasks must be performed independently by a staff member, which specific tools the newsroom will use and which (if any) it will not, and what review and fact-checking processes will be implemented for AI-generated content. For examples of how newsrooms are implementing these policies, check out the Nieman Lab’s July 2023 roundup and the Associated Press’s guidelines for using generative AI.

Be transparent with your audience about your generative AI policies.

Let them know when, and to what extent, AI-generated content was used to research or publish a story.

What’s missing?

Facts Forward is a collaborative effort. If you have questions, suggestions to improve this resource or others, or want to highlight disinformation reporting done well, please send an email to factsforward[@]pen.org.
