
These Disinformation Trends Worry Expert Yaël Eisenstat in the Run-up to the U.S. Election

April 4, 2024

As a tech and democracy expert heading into the 2024 election, Yaël Eisenstat’s biggest concern isn’t necessarily that AI-generated deep fake videos or synthetic audio will affect the vote. It’s that the mere awareness of such tools might cause people to distrust any information they see.

The 2020 election saw QAnon conspiracy theories make their way into the mainstream, before such sophisticated artificial intelligence tools existed. Now, not even photos of Catherine, Princess of Wales, escape deep scrutiny from the public.

“Journalists have an incredibly difficult job at a time when they are increasingly under-resourced. Now they have the extra challenge of having to figure out which images, audio, and video are accurate while still facing pressure to report quickly. I don’t envy that challenge,” Eisenstat says.

Eisenstat has long advocated that one thing social networks could do to slow the spread of disinformation is to build in more “friction,” something most have been reluctant to do because of their engagement-based business models. Journalists face their own challenge: not just verifying news, but recognizing that even official sources could release material that has been artificially generated, doctored, or altered.

Eisenstat is an advisor to PEN America’s disinformation programs, which focus on helping journalists, engaging in political advocacy, and building community resilience. In an interview with Katrina Sommer, spring 2024 Free Expression fellow at PEN America, Eisenstat provided insights into the complexities of the current technology landscape and the challenges of combating disinformation, and she highlighted a multipronged approach to addressing them.


You spent much of your government career focused on counter-extremism. What led you into combating disinformation?

How information campaigns are used to radicalize people and how to counter those efforts are issues that I have focused on throughout my entire career, beginning before 9/11 and before social media was part of this conversation. Around 2015, it became clear to me how the same tactics that were being used in the analog world were actually becoming easier to do both at scale and without that sort of human touch point, as more and more of our lives migrated online.

So I come to this work from the lens of how people are radicalized and what interventions there are to try to reverse those trends. When I talk about radicalization, I’m talking about when some people are pushed into further and further extreme views to the point of either acting upon those views in an anti-democratic way or refusing to view anybody with an opposite view as someone worth engaging with or having in your life at all. That’s why I shifted to focusing on how social media, AI-powered algorithms, and now generative AI affect these very issues.

You joined Facebook in 2018 as the head of global elections integrity for political ads. When you left after six months, you criticized the company publicly for its failures to protect the integrity of elections. When you look at the social media landscape now in terms of election integrity, is it better or worse than when you were inside six years ago?

It’s actually more difficult to tell if it’s better or worse, which in itself is problematic and makes me feel like it is likely worse. Here’s what I mean by that.

I had already started publicly criticizing Facebook (now Meta) for some of its decisions and business practices before becoming an employee. When I joined in 2018, it was a pivotal moment for the company. They made the job offer to me one minute after Mark Zuckerberg stepped out of that Senate hearing in 2018, where he promised over and over to take election integrity and their responsibilities more seriously. So I went into this role cautiously but with the hope that they would start prioritizing these issues.

However, I continued to fundamentally disagree with many of their business and design decisions, including their decision that political figures would be exempt from the same rules that everybody else had to abide by on their platforms, and left six months later.

Fast forward to 2020, many of the platforms did have election integrity policies in place and did seem to at least prioritize some of that work leading up to election day in the U.S. However, Meta and others had let false narratives like “Stop the Steal” spread like wildfire on their platforms, and as I already mentioned, were exempting political figures from their own rules. We all know where that led on January 6.

But rather than learn the lessons of that day, it seems companies are backsliding. Many of these companies have laid off large portions of their trust and safety teams over the past year; they have gutted the very teams that protect things like elections or enforce their policies around hate speech, harassment, and incitement to violence.

On top of that, there has been a multi-front assault on the people who try to study and shine a light on disinformation and dangerous online narratives that lead to things like voter suppression and election-related violence. In this supercharged legal and political climate, it seems to me that the companies are retreating back to pre-2016 stances.

I cannot definitively say we are worse off, because it is possible that these companies have plans that they’re just not letting the rest of us know about. But from everything that I can see and from everything that the mainstream social media companies have communicated to the public, they have retreated to a stance that is much less prepared, and frankly, does not seem to prioritize this work anymore.


“I come to this work from the lens of how people are radicalized and what interventions there are to try to reverse those trends. When I talk about radicalization, I’m talking about when some people are pushed into further and further extreme views to the point of either acting upon those views in an anti-democratic way or refusing to view anybody with an opposite view as someone worth engaging with or having in your life at all.”


You have discussed the idea of adding more “friction” to platforms. What does that term mean within the context of the Internet? Would more friction on social media help prevent disinformation from spreading? If so, what would that look like?

Friction helps slow something down; it’s like a speed bump. Friction gives people at least a temporary pause, even if only a few seconds, for their brain to process and think: “Wait a minute, why am I believing this headline? Is it because it’s real or is it because it’s making me have an emotional reaction?” By doing that, your brain has a moment to catch up to your initial, emotional response.

The problem is that with most mainstream social media companies, their business model is predicated on a frictionless experience: to keep scrolling without the page slowing down, to respond to something immediately, and to keep you continually engaged on their site as long as possible. This is how they make money: keep you engaged so they can continue to hoover up your data, gain enough intimate information about you, and sell tools for advertisers to target you with relevant ads. Algorithms know what content you will find the most engaging, and your next video is recommended before you’ve even fully finished watching the current video. Slowing down, thinking about what you’re seeing, questioning its accuracy, fact-checking, clicking on the article, and reading the page on its original news site… none of these are frictionless experiences.

In October 2020, X (Twitter at the time) experimented with adding friction as a way to stem the spread of mis- and disinformation ahead of the U.S. election. For example, users were prompted to add their own commentary before retweeting something. You may recall seeing pop-ups asking if you had actually read the article, or having to add your own thoughts before you could retweet.

Why did they do that? That’s building in friction. It’s making you have to stop and think about the piece of content before reacting from an emotional place; it’s building in a few seconds for you to be thoughtful about whether you really want to share the piece. And my understanding is that Twitter actually proved that it did slow the spread of mis- and disinformation.

But here’s the interesting thing. They stopped doing that after the election. So they had an intervention that proved to slow the spread of disinformation, but they did not adopt it permanently.

Why? Because when you make people stop and slow down, you’re also making them slow down on sharing content, liking content, and scrolling to the next thing. And again, that is antithetical to a business model that has to keep you constantly engaged to thrive. So yes, I believe that there are lots of ways you could build in friction. The question is, are the companies incentivized to do so?
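
To make the idea concrete, here is a minimal, hypothetical sketch in TypeScript of what a pre-share friction check, loosely modeled on the 2020 Twitter prompts Eisenstat describes, might look like. The interface, function names, thresholds, and prompt wording are invented for illustration and are not drawn from any platform’s actual implementation.

```typescript
// Hypothetical sketch of a pre-share "friction" check. All names and prompt
// text are invented for illustration; this is not any platform's real code.

interface ShareRequest {
  userId: string;
  articleUrl: string;
  hasOpenedArticle: boolean; // did the user click through before resharing?
  addedCommentary: string;   // text the user typed alongside the reshare
}

type FrictionPrompt =
  | { kind: "read-article"; message: string }
  | { kind: "add-commentary"; message: string }
  | { kind: "none" };

// Decide whether to interrupt the reshare with a prompt. The goal is not to
// block sharing, only to insert a brief pause before the content spreads.
function frictionCheck(req: ShareRequest): FrictionPrompt {
  if (!req.hasOpenedArticle) {
    return {
      kind: "read-article",
      message: "You haven't opened this article yet. Read it before sharing?",
    };
  }
  if (req.addedCommentary.trim().length === 0) {
    return {
      kind: "add-commentary",
      message: "Add your own thoughts before resharing this post.",
    };
  }
  return { kind: "none" }; // no speed bump needed; the share proceeds
}

// Example: resharing an unread article triggers the first prompt.
const prompt = frictionCheck({
  userId: "u123",
  articleUrl: "https://example.com/story",
  hasOpenedArticle: false,
  addedCommentary: "",
});
console.log(prompt.kind); // "read-article"
```

The point of the sketch is simply that the prompt interrupts the reshare for a moment rather than blocking it, which mirrors the “speed bump” framing above.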

To the extent we can see where this technology has and will continue to evolve, what are the emerging trends or tactics in AI-augmented or generated disinformation campaigns that concern you the most?

I’ll start with AI. My concerns have been similar for years. It’s AI-powered algorithms that are helping to fuel some of the most harmful content and behavior online. That has always been one of the things I have focused on and will continue to focus on.

The newer challenge is the ease and availability of these new generative AI tools that can actually generate content based on your prompts: things like ChatGPT, “deep fakes,” or synthetic audio, where you can create fake audio from a very small clip of my voice.

The danger that concerns me most is not necessarily that deep fakes or fake audio are going to actually persuade millions of people to vote one way or the other, incite large numbers of people to engage in political violence, or lead them to believe disinformation over facts. One of my bigger concerns is that the mere awareness that these tools exist might cause people to stop trusting any information, regardless of how reliable it might be. And that pervasive loss of trust undermines democracy, including by promoting resigned or disgusted passivity in the face of the serious challenges we have.

We know that bad actors will exploit this trust deficit, in what is commonly referred to as a “liar’s dividend” scenario. For example, we could see a candidate running for office getting caught on an audio recording saying something horrible. All that candidate has to do is say, “That wasn’t me. That’s fake.” Whether or not journalists prove the recording’s authenticity doesn’t matter; doubt has already been planted in the public’s mind.

I want to re-emphasize this point: it’s not necessarily a question of whether deep fakes will become so sophisticated that they truly change the landscape of the election. It’s that they will further erode trust in an information environment where trust is already at an all-time low.


“The danger that concerns me most is not necessarily that deep fakes or fake audio are going to actually persuade millions of people to vote one way or the other, incite large numbers of people to engage in political violence, or lead them to believe disinformation over facts. One of my bigger concerns is that the mere awareness that these tools exist might cause people to stop trusting any information, regardless of how reliable it might be.”


What is some advice you would give journalists struggling to navigate the disinformation and generative AI landscape?

As we head into a very heated election here in the U.S., one of my concerns is how journalists can continue to do this work without inadvertently casting even more doubt on the information landscape.

For example: when a clip comes out from an official source or candidate, a journalist would fact-check that information before publishing. But now journalists have the extra challenge of realizing that the candidate or official source themselves could release something that has been artificially generated, doctored, or altered. And, as part of the world that social media has helped create, journalists know that their pieces have to be fast. They have to get the public’s attention. They have to be produced quickly to beat the competition, before the public has moved on to the next story. And some of that is going to happen before they have fully been able to authenticate everything.

So a question is: how can journalists use appropriate caveats in language, such as “we cannot independently authenticate this, we cannot independently verify this” without also signaling to the reader or the listener to distrust everything? I don’t have the perfect answer to this. But I do think we have to build a whole new skill set for journalists to navigate that environment while not sowing even more distrust in the news itself.

Today, even the concept of “disinformation” itself has become politicized. How do we discuss these topics?

In recent years, there has been a multi-front assault on people who study disinformation, people who shine a light on disinformation, and people who try to expose bad actors. Between lawsuits against researchers and civil society, recent court cases implying researchers and governments should not be able to interact with social media companies, and Hill hearings against certain researchers, it is very clear that political and private actors are intentionally increasing the burdens of studying disinformation.

It is an intentional campaign, in large part driven by partisan politics, that has led to the term “disinformation” itself becoming a dangerously politicized word. And there has been a chilling effect on people who study disinformation, who try to combat disinformation, who try to protect our democracy against disinformation campaigns.

We are in a precarious situation right now if people start feeling like they can’t even acknowledge that disinformation is in itself a problem and work to try to figure out how to combat it.

Despite the challenges, what makes you optimistic when working on combating disinformation?

I know that some are discouraged because they feel we haven’t been making progress in this fight against disinformation, especially when it comes to social media. But I would argue the opposite.

This is one of the greatest challenges of our generation, and it will take years of working from every angle. There is no “silver bullet,” and we won’t suddenly “end” disinformation or reform our systems, especially not in just a few years.

I think the fact that more and more people are not only aware that this is truly a problem but are also seeking solutions from all over society is encouraging. Whether it’s innovative technology that uses AI to better support content moderation, programs that help fund local newsrooms, educators teaching their students how to be better consumers of online information, or community-based programs that build resilience, these are all positive signs that more and more parts of society are aware of and concerned about this problem.

I think there are major challenges we haven’t solved, such as the legislative landscape, including what accountability should look like for tech companies that are themselves proliferating harmful content. But I am encouraged by how many different parts of our society want to figure out how to combat some of these problems. And I think that’s a great thing.


Yaël Eisenstat is a Senior Fellow at Cybersecurity for Democracy, working on policy solutions for how social media, AI-powered algorithms, and generative AI affect political discourse, polarization, and democracy. She is an advisor to PEN America’s disinformation programs.