This Q&A is part of a series of interviews with journalists and experts who regularly handle disinformation. The interviews will appear periodically through the election and beyond to highlight best practices, insights, case studies and tips. Our goal is to provide a resource to assist reporters and community members through the news events of 2024.
PEN America conducted this conversation with the Wikimedia Foundation because of its unique place in the fight against disinformation via human-led content moderation. The site’s volunteers are not traditional journalists or researchers, but they moderate a vast body of information and keep it reliable and neutral.
The answers below are from Maggie Dennis, vice president of community resilience and sustainability at the Wikimedia Foundation, the not-for-profit organization that hosts Wikipedia — one of the 10 most-visited websites in the world.
Is there a particular team at the Wikimedia Foundation charged with countering disinformation on Wikipedia? If so, who is on it and what’s the scope of responsibility?
For over 20 years, a global community of 265,000+ volunteer contributors has created and moderated all the information on Wikipedia. These self-governing volunteers make all editorial decisions based on shared policies that they have collaboratively created, one of which is that encyclopedic content must be written from a neutral point of view. This means it must be verifiable against and attributed to reliable sources, and presented fairly, proportionately, and, as far as possible, without editorial bias.
Wikipedia’s volunteers review edits in real time, and trusted administrators with advanced permissions can investigate further and block purveyors of disinformation from editing again. This human-led content moderation process has been highly successful at keeping misinformation and disinformation off Wikipedia.
The Wikimedia Foundation supports the volunteers in this work. On the Foundation side, we have had a dedicated Trust & Safety team since 2020 that supports volunteers in identifying and countering disinformation campaigns on Wikimedia projects. As part of this function, the team alerts and supports the volunteer community in extreme cases that are reported to it. In some instances, we investigate larger issues of disinformation and other systematic forms of disruptive behavior that might appear on the site. We may also take action in a small number of cases, such as when the behavior of contributors on Wikimedia projects threatens the safety of other editors, keeps false information on the platform, or prevents volunteer communities from functioning properly.
How exactly does Wikipedia identify and counter disinformation on Wikipedia sites?
Most cases are handled by volunteers, including trusted volunteers with access to tools that can track clusters of malicious accounts and evaluate account interactions over long periods of time. Many of these tools are available across Wikipedia’s language editions. When such accounts are uncovered, they are typically blocked from editing at a local or even global level, and other tools allow editors to restore any affected pages.
The Foundation supports these volunteers through its Trust & Safety team, which conducts deep research into disinformation campaigns and shares its findings with those volunteers. We also have the authority to take direct action in specific cases, typically in support of existing volunteer actions against bad actors across the Wikimedia projects.
Is the biggest bulwark against disinformation on the site the fact that it is crowdsourced by multiple committed editors? Or through a dedicated team and tech tools? What about pages that don’t get a lot of editorial attention?
We firmly believe that Wikipedia’s most important bulwark against disinformation is the platform’s open, transparent, and participatory model. The more humans take part, the better the internet’s knowledge becomes. Studies have shown that when larger numbers of volunteer editors from diverse political backgrounds contribute to a Wikipedia article, the resulting article is of higher quality. On Wikipedia, as more people edit an article based on fact-checked sources and form consensus on what information should be present, the information becomes more reliable and neutral.
Wikipedia exists to inform, not persuade. If you choose to contribute to it, you must abide by its firm prohibitions against original research and its restrictions on “fringe” theories. New information is vetted and cited to reliable, independent, published sources with a reputation for fact-checking and accuracy, such as newspaper articles and peer-reviewed journals. Notably, all of this work is available for anyone to view. Everything from the way an article grows and evolves over time, to the citations used to verify the facts, to the content discussions among editors, is publicly available on the article’s history and talk pages.
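As an illustration of that transparency, the minimal sketch below pulls an article’s public edit history through the MediaWiki Action API. The article title and revision limit are arbitrary example values, and read access requires no special permissions.

```python
# Minimal sketch: fetch the public revision history of a Wikipedia article
# through the MediaWiki Action API. The title and limit are arbitrary
# example values; read access requires no authentication.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Disinformation",          # example article title
    "rvprop": "timestamp|user|comment",  # who edited, when, and why
    "rvlimit": 10,                       # last 10 edits
    "format": "json",
    "formatversion": 2,
}

response = requests.get(API_URL, params=params, timeout=30)
response.raise_for_status()

page = response.json()["query"]["pages"][0]
for rev in page["revisions"]:
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```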
Has generative AI posed a disinformation problem on Wikipedia? If so, how? What types of generative AI?
We have not seen any major impacts from the rise of generative AI. While AI can generate copies of Wikipedia articles, it has not overcome its “hallucination” problem. On Wikipedia, where information is required to have citations to real, reliable sources, AI-generated information can be quickly flagged and removed by human editors. Additionally, the Wikimedia Foundation’s engineers work closely with volunteer communities to monitor for any sign of disinformation campaigns mounted with automated software.
What tools do you use in addition to human review? Do you use AI tools to counter disinformation? Are you working on new and improved ways to use human review and/or tech to fight disinformation?
While all information on Wikipedia is created and curated by humans, its volunteers have a number of artificial intelligence and machine learning tools available to them for combating disinformation and other forms of malicious editing. Many of these tools have been built by Wikipedia volunteers, and the Wikimedia Foundation has been supporting their efforts for over twenty years.
For example, volunteers have developed and deployed a wide range of specialized tools for patrolling Wikipedia, including bots (e.g., ClueBot_NG, ST47ProxyBot); gadgets, userscripts, and extensions (e.g., Twinkle, LiveRC, Real time recent changes, PageCuration); assisted editing programs (e.g., Huggle, VandalFighter, AutoWikiBrowser); and web applications (e.g., CheckWiki, CopyPatrol, XTools, Global user contributions). These tools help them quickly identify and revert damaging edits.
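As a rough illustration of the public data stream these patrolling tools build on, the sketch below polls the MediaWiki recent-changes feed for the latest non-bot edits. It is not the code of any tool named above, just a minimal example of the kind of feed they monitor.

```python
# Minimal sketch: read the public recent-changes feed that patrolling tools
# build on. Illustrative only; this is not the implementation of any of the
# tools named above.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "list": "recentchanges",
    "rcprop": "title|ids|user|comment|timestamp",
    "rcshow": "!bot",   # skip edits flagged as bot edits
    "rctype": "edit",   # ordinary edits only, no log entries
    "rclimit": 20,
    "format": "json",
    "formatversion": 2,
}

response = requests.get(API_URL, params=params, timeout=30)
response.raise_for_status()

for change in response.json()["query"]["recentchanges"]:
    print(change["timestamp"], change["user"], "->", change["title"])
```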
The Foundation has a research team creating a new generation of AI models to support knowledge integrity by increasing the capabilities of patrollers (trusted users who review newly created Wikipedia pages) and by identifying content where more citations are needed, among other things. Machine learning systems like the language-agnostic revert risk model can run in any Wikipedia language edition, making new content moderation products developed by the Foundation (e.g., Automoderator) accessible in more and more languages.
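The language-agnostic revert risk model is served through the Foundation’s public Lift Wing inference service. The hedged sketch below shows roughly how a tool might request a score for a single revision; the endpoint path and payload shape follow the public Lift Wing documentation but should be treated as assumptions that may change, and the revision ID is a placeholder.

```python
# Hedged sketch: ask the language-agnostic revert risk model, hosted on the
# Wikimedia Lift Wing inference service, to score one revision. The endpoint
# path and payload shape follow the public Lift Wing documentation and may
# change over time; rev_id is a placeholder, not a known case of abuse.
import requests

LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/"
    "models/revertrisk-language-agnostic:predict"
)

payload = {"lang": "en", "rev_id": 123456789}  # placeholder revision ID

response = requests.post(
    LIFTWING_URL,
    json=payload,
    headers={"User-Agent": "revert-risk-example/0.1 (demo)"},
    timeout=30,
)
response.raise_for_status()

# The response includes a revert-risk probability for the revision. Printing
# the raw JSON avoids assuming a specific response schema.
print(response.json())
```

A score like this is a signal for patrolling workflows and products such as Automoderator, not a final judgment on the edit.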
The Wikimedia Foundation has had a team dedicated to machine learning since 2017. You can learn more about the team and their work here.
Is there a difference to your approach for identifying and countering disinformation when it comes to covering breaking news as opposed to other types of articles?
Wikipedia is not a newspaper, so the volunteer communities that edit it prioritize accuracy over speed. Furthermore, when it comes to news that is ambiguous — such as evolving crisis situations — editors typically enact measures that ensure the information they are presenting is as accurate as possible. As always, Wikipedia’s information must be verified against reliable sources. As external sources become more aligned on what is factually occurring, this also influences Wikipedia’s content — editors are able to update and strengthen content based on fact-checked articles, often corroborating information from a wide array of sources.
What’s an example of one of your biggest successes countering disinformation on Wikipedia?
Wikipedia’s biggest strength remains its volunteers, who are the first line of defense against any information that does not meet the standards of the website. Not only are these volunteers working behind the scenes to protect Wikipedia, but they are also sharing their experiences and best practices with others. Last year, the Foundation compiled, for the first time, one central repository of the tools and systems developed by volunteers to tackle misinformation and disinformation. This repository is a valuable resource that the global Wikimedia volunteer community, as well as others interested in combating disinformation, can access and build upon to strengthen the ways in which they address these challenges.
As a recent example, Wikipedia volunteers and the Wikimedia Foundation partnered in 2024 to proactively prepare for the largest election year in human history. Thanks in large part to the efforts of volunteers on Wikipedia, we were happy to see that there was no major disinformation activity on Wikipedia during the European Parliament election. Using their existing practices, Wikipedia’s volunteers were able to successfully navigate the threats of misinformation and disinformation to multiple elections held within the EU.
What are the most difficult disinformation challenges on Wikipedia and how are you grappling with them? Can you describe one of the worst instances of disinformation?
While it may not seem directly related to disinformation, we know it’s incredibly important for the public and decision-makers alike to understand how Wikipedia works, so that people everywhere can continue to rely on it as a trustworthy source of information. We face an ongoing challenge of helping people understand the complexities of the unique, collaborative systems that make Wikipedia function. For example, many people don’t know that Wikipedia is operated by a non-profit organization, or that information on the site is created and curated by volunteers. Many also do not understand the rigorous requirements those volunteers have created and enforce for sourcing articles, or the ways they work collectively to ensure that the information selected is verifiable in reliable sources and neutral. These volunteers work diligently and around the clock to protect against disinformation as well as inaccuracy and bias. There is always room for those who want to learn the policies and processes and help!
What are you seeing in terms of disinformation related to the U.S. election?
The Wikimedia Foundation has put together an anti-disinformation task force in close collaboration with Wikipedia volunteers to monitor potential disinformation actions that could affect Wikipedia around the U.S. and other elections. That team has developed a brief overview on how different Wikipedia communities can handle disinformation attempts, and the Wikimedia Foundation is always on standby to assist volunteers in those efforts. At this stage in the U.S. election cycle, we have not seen notable disinformation attempts on Wikipedia.
That said, we know that disinformation is most prevalent and dangerous when public opinion is strongly divided, which is the case in the U.S. political landscape. When different groups of people have different opinions of what constitutes a “reliable source” — let alone what constitutes the truth — people can get very passionate about how a person, event, or idea is portrayed on Wikipedia. This is why the Wikimedia Foundation is so committed to protecting the privacy and safety of our volunteer editors. It is also why we have a small but dedicated team of lawyers who work hard to protect volunteer editors’ freedom of speech. Our Global Advocacy team advocates for laws that protect and support volunteer editors’ rights to edit Wikipedia, to set rules for what constitutes a reliable source, and to enforce those rules without fear for their safety or livelihoods.
To the extent that Wikipedia has developed particularly effective ways to counter disinformation, could those be duplicated on other tech platforms?
Wikipedia’s ways of countering disinformation cannot be easily duplicated. Wikipedia’s volunteers designed and cemented its self-governing policies and procedures at the height of Web 2.0 — in internet time, an epoch ago. It is a living testament to the utopian ideals of the old internet. Today, Wikipedia’s longevity is part of its continued strength in the face of new threats.
That governance structure alone would be difficult to design from scratch today. It was certainly hard enough for Wikipedia’s volunteers roughly two decades ago. But someone looking to replicate Wikipedia’s model would also need to build a community of people who share a common goal so strongly that they happily donate hours upon hours of their time to curate content for the betterment of the world.
What could potentially be replicated by others is Wikipedia’s strong commitment to verifying its information against quality reference material. For anyone going down this road, we would suggest Wikipedia’s robust criteria for defining what a “reliable” source is as a starting point. In short, information from questionable sources like a Telegram post is disallowed in nearly all cases, while reporting from the Associated Press is treated as a gold standard.
Wikipedia’s volunteer contributors carefully consider whether sources are appropriate for use on Wikipedia. For example, editors meticulously assessed the reliability of the television channel Fox News and decided to disallow its use only in the topic areas of politics and science, and they disallowed the use of post-2022 articles from the once-respected CNET in response to a decline in quality and the use of AI-generated content.
Are there things that journalists fighting disinformation could learn from what you do?
Wikipedia and journalists are natural allies in the fight against disinformation. Wikipedia mirrors the world’s source material, including news articles, and it relies on those sources to build its knowledge base and disseminate it to wide audiences. Journalists often refer to Wikipedia for reporting or for research to find reliable secondary sources. Both journalists and Wikipedia volunteers face threats in various parts of the world, and both are essential to ensuring that reliable information is available to everyone, everywhere.