Deepa Seetharaman covers artificial intelligence and the way AI is transforming both business and society. PEN America spoke to Seetharaman to discuss the ways this rapidly evolving technology is affecting our politics and the state of disinformation today, especially as the line between Big Tech and government narrows. We discussed AI-generated disinformation, the declining trust in institutions, and whether the fear around AI is overblown.
This conversation has been edited for length and clarity.
You cover AI, technology and politics, and those things have shifted so rapidly over the past few years. Can you start by telling me how your beat has changed and how your coverage priorities have shifted?
I’ve been covering tech for about 11 years, and I started out covering companies. I covered Amazon for a minute, then I covered Facebook, now Meta, and now I write about artificial intelligence. And the thing that has been interesting, besides the actual subject change, is that it’s no longer just about what these companies want to do. It’s become more about how what they want to do is going to affect the broader information ecosystem and politics. The tech has just grown, and as it’s grown, it’s become so much more political. All these decisions, like where to put a data center, aren’t straightforward. I don’t think they ever were, but there are even more factors to consider now, and the political stance of each of these companies is more and more important and more and more visible. That has been the most significant change.
Would you say that’s made your work harder, to take into account all the political dynamics? Or do you just do the work and try to drown out all the noise and all the ideologies?
I try to take it a story at a time, but periodically it’s important to look at the broader picture and see how it all might fit together. At a certain point these companies are so big and so powerful, it’s like you’re covering a country when you’re talking about the platform and the ecosystem these companies live in. I just go story by story, and, you know, I don’t sit around and think my way through these things. It really helps to be a reporter in this scenario. I just talk to as many people as I can, because anything I think is never going to be as sophisticated and informed as a source, a person who’s worked inside these companies, an analyst who’s had to deal with the executives at these companies. It’s just never going to be as good as that.
I want to pivot a little bit. There was concern before the election that deepfakes and other AI-generated mis- and disinformation, like pink slime websites and disinformation campaigns from abroad, were going to affect the election. How much of a role do you think that actually ended up playing in the outcome, if any?
It’s a little hard to say. I think right now, the broad conclusion is that it’s not as if there was one piece of fake news or one set of media that warped what people thought was and wasn’t true. But I think the mere existence of AI has accelerated this broader mistrust in institutions, particularly the media. “I don’t know what to believe. Everyone’s lying.” That’s the kind of attitude a lot of readers have, and I get where they’re coming from. I see this with some of my family members. They look at something, and they’re like, “Yeah, this is probably AI, but who knows?” At a certain point, some people just throw up their hands and don’t want to think about it.
I want to go back to that lack of trust in institutions, especially since you work at a legacy media organization, the Wall Street Journal, a place with a long, storied history. How do you go about covering something like AI, which so often deepens distrust, while maintaining a good relationship with your readers and getting them to trust you?
That is a much larger question, one that a lot of media organizations need to answer institutionally. I can only control myself, and the way I try to do that is by being transparent, telling people how I work, and then doing exactly that. It’s really clean and simple. I bounce things I heard from one source off another source, then off a third, and I just try to talk to as many people as possible. In doing so, I continue to build trust in a one-to-one way over time.
I also find that a lot of people, particularly here in Silicon Valley, have never actually talked to a journalist. Their opinions are really informed by the trainings they get at the beginning of their tenure at any one of these companies, where new employees are given a whole lot of rules about how to work there. And there is often a component of that which is, “Don’t talk to the media. If you leak to the media, we will find you. Don’t do it.” If that’s your main exposure to what journalists want, there’s an immediate suspicion there. So I walk in knowing – I don’t even get mad about it – that this is going to be their impression of me, and that it is up to me to just do my job the way I do my job, which I hope is ethical, transparent, open and rigorous on a regular basis. If they decide to change their mind about the media after that, that’s up to them. If I function like that over the course of multiple interviews, with every source I talk to, then at least I can sleep at night. I feel good.
That’s very important!
There’s just this feeling of, “Oh my god, reporters are so deceptive.” A lot of people ask me, “Are you recording this conversation right now? Have you been recording all of our conversations?” I appreciate the paranoia. I’m like, “No, that’s illegal. California has two-party consent.” And that kind of thing comforts them – there’s an additional piece of information, that it’s actually illegal to do that in this state. And maybe then they’ll go look that up, and that adds to my credibility. I think credibility is like tiny pebbles: over time I’ll build something, and that really feels like what I can do as an individual.
Going back to the fear around AI technology: Some experts and researchers were saying that the risks of generative AI were being overblown. What are those experts and sources you talk to saying now? Has that conversation shifted in the past year?
I think the first year after ChatGPT came out, there was a lot of conversation about existential risk – could these systems be used or developed in a way that harms us, either intentionally or unintentionally? And then over the second year, in 2024, it was not really about that. The conversation shifted to risks like, “How will this affect the election? How will this affect the political system?”
I think a lot of people who are really worried about the existential risks are incredibly sincere in their beliefs. You also meet people who think that’s not the right thing to worry about, that we should worry more about near-term issues, like bias and the ways we incorporate these systems into our daily lives. You mentioned pink slime sites earlier in our conversation – these sites that look like news sites but aren’t; they’re just full of AI content. For a lot of readers in news deserts, areas where they feel like they’re not getting enough local news, I could see how someone with a lot on their plate is not going to investigate every detail in a story, or even one detail – they don’t have the time. So it becomes easier to just see these stories and headlines and move on, and you get an impression of your local area that may not be connected to reality. I also really worry about things like, can we develop AI systems that can cover a city council meeting? What would that even look like? What would they need to know? How do we know how to deploy them? How do we know if that’s the right mechanism? Because right now, those areas just aren’t being served. There are a lot of places where there’s just one reporter, and if that reporter has the flu, then that fundamentally changes what gets covered.
A lot of journalism just feels so brittle. At least in my world, when I talk to sources about misinformation, that’s the thing they worry about: that somebody can use AI to very easily break open some of these brittle worlds and supplant real news, which is fading anyway. That feels like a really nuanced but important risk.
As these Big Tech CEOs gain prominence in the government and in this new administration, what are you and your sources expecting to change in people’s daily lives? What can news consumers and tech users expect from the next four years? Is there going to be any effect on people’s daily lives, or are all these advancements going to happen behind closed doors?
I don’t know, that’s the honest answer. I think there’ll be a little bit of both. I’m of the belief that the tech companies we have – and frankly, this is true of all companies – are personality-driven. Team heads don’t get along, and that shapes the way the product comes out. Or the CEO has gone through a personal evolution and has decided that this shouldn’t be the priority, that should be the priority. We had a story about Ford, and how the CEO of Ford went to China, and it was a huge come-to-Jesus moment. He came back and said, “Okay, they’re kicking our ass, we need to regroup.” It’s that one person’s vision, that one person’s personality and priorities, that shape the entire environment. So I think a lot about how to describe those personal dynamics inside a company, and then draw a line between those beliefs, dynamics and alliances – the evolution of different executives within a company – and what we, as consumers, see in the world. Because there is a line. There’s a connection. This is very clear in tech. Mark Zuckerberg just decided that he was going to, in his words, get rid of the fact-checkers and start over, and get to a place that he thinks is more free-speech-oriented. But it’s also important to note that in that same conversation there were certain small, lightweight changes. In Messenger, there were themes that you could apply to chats, and one that had a trans theme was removed as an option. A lot of these things are not separate. The political activities of the executives, what they say, their social lives, what motivates them, what drives them – all of that you can see in these products that billions of people use. It’s world-changing, right?
Do you have anything you want to discuss that I didn’t ask about, or anything else you want to say on the topic before I let you go?
It’s really important to understand the way technology works now. I cover AI, and for the first few months of this particular beat, I was asking people, what is AI? I got a billion different examples. Nobody had a definition that matched anybody else’s. That was really instructive, because it told me how much the technology itself is still evolving. AI is still a rapidly evolving field, and there is a big myth out there that this thing is magic. It’s not magic. It’s a lot of tiny decisions made on the back end that work in concert and sometimes in unexpected ways. And each of those tiny decisions is the result of a human. Even if the human says to the system, “You make the decision,” that is still a human kicking off the process, and that human has biases and interests and is following a certain set of expectations. My hope is that through the range of stories we write, I’ll make clear to people that as impressive as this is, and as all-encompassing as these systems can feel, they’re human enterprises that can be changed and shifted, and aren’t the end-all be-all. A lot of people look at Silicon Valley and are intimidated by how smart everyone seems and how powerful the place is, and this might be true of reporters, too. But I’ve found in all my years covering this place that it’s the really simple, almost dumb questions that are the most revealing. Just: How does this work? Why does it work like that? Why doesn’t it work like this? Those kinds of really basic questions will reveal how much of the system is kind of put together with spit and string, the extent to which it’s the product of people. That’s a really important thing to keep in mind – these are political decisions, and they will shape and affect all of our lives.
Deepa Seetharaman is a reporter covering artificial intelligence from the Wall Street Journal’s tech bureau in San Francisco. She previously covered the intersection of technology and politics, as well as Meta (when it was called Facebook). She joined the Journal after covering e-commerce and Amazon at Reuters.