
Cynthia Miller-Idriss | The PEN Ten Interview
Cynthia Miller-Idriss’ Man Up: The New Misogyny and the Rise of Violent Extremism is a revelatory and urgent story of how an explosion of misogyny is driving a surge of mass and far-right violence throughout the West.
In conversation with PEN America’s Digital Safety and Free Expression Program Coordinator, Amanda Wells, Miller-Idriss takes a deep dive into online abuse, the relationship between online extremism and book bans, and how AI is changing the landscape of online abuse, hate, and harassment. She also explores the warning signs that someone might be engaging with abusive content online, and offers guidance on how adults can talk to the young men and boys in their lives about this phenomenon.
In Man Up, you make the connection between online abuse against women and offline extremism, which relates to something we know from our Digital Safety work at PEN America: that online hate and harassment isn’t “just online” and its targets can’t “just log off.” Can you say more about the need to take online abuse seriously?
There are a few different reasons to take online hate and harassment seriously. First, it affects targeted individuals’ and groups’ mental health and well-being, even if they aren’t physically harmed. Verbal abuse—including online forms—can have a physical impact on the nervous system. One example I cite in the book is from the beta testing of a virtual reality game in which a woman player was virtually groped. She described how awful it felt, and research supports this: online harassment affects people physically, with an elevated heart rate, rapid breathing, sweaty palms, and other nervous system responses similar to those triggered by physical attacks.
Some online abuse involves the sharing of nude photos or revenge porn, or the creation of AI-generated nude images or videos—in ways that exploit and haunt victims for years as they pop up repeatedly or are recirculated online. Other online abuse involves doxing, or the sharing of personal information including home addresses and workplaces in ways that lead to physical harms, threats, swatting, or attacks in person.
Beyond individual harms, online hate and harassment help normalize and legitimize abusive behavior and desensitize a wide range of users and viewers, lowering the bar for what might otherwise seem shocking or offensive. And because online hate is often framed as “just a joke,” it’s often dismissed by those who perpetrate it, who instead accuse victims of being “triggered snowflakes” who take everything too seriously.
“Vicarious victimization,” which occurs when people witness abuse that is not necessarily directed toward them, but toward someone who shares their identity, can lead to self-censorship. How does online abuse detrimentally impact the free speech of marginalized communities?
I learned the term “vicarious victimization” from Dr. Dan Relihan, a social psychologist and deputy director of research for my lab, the Polarization and Extremism Research and Innovation Lab (PERIL). Dan had done research on how the Pulse nightclub shooting affected members of the LGBTQ+ community across the U.S., even if they were not there and didn’t know anyone who was. He found that people who identify as members of the same identity group as those who are attacked can experience some of the same distress symptoms afterwards as those who were present. Vicarious victimization can lead people to become hypervigilant or feel they must be less out in their queer identities, for example.
These experiences can lead people to self-censor, to fail to speak up when witnessing abuse, or to step away from public roles. A close colleague of mine who works on gun reform just withdrew from a university speaking event, for example, because the climate didn’t feel safe for speaking publicly. And that’s how we end up with censored ideas and less free speech: not only through direct suppression, but also because hate, harassment, and the threat of harm mean that increasing numbers of people are afraid to speak. We also see, these days, a lot of “comply in advance” behavior from people who are censoring the views of others (canceling talks, programs for students, training workshops, etc.) because they fear they “might” upset the administration. The Germans have an even better word for this: vorauseilender Gehorsam, which roughly means “rushing ahead to obey.”
How do you think about striking the right balance between protecting free speech online and protecting communities who are impacted by widespread online harassment, particularly those disproportionately targeted for their identities and/or for their work as writers, journalists, and artists?
Free speech is a fundamental American right and a value I personally cherish. I don’t want to see censorship or suppression from the government. But content moderation policies are something totally different. There is no reason why social media platforms—or any private company or organization—cannot be as strict as they want to be about what is allowed or not allowed in their spaces and places. That goes for dress codes as well as for content moderation policies and professional conduct policies. If we don’t like those policies, we can vote with our feet and use other platforms or companies or products to demonstrate our own values.
In that context, what I would like to see is both a commitment to free speech in legal terms—including protection from government censorship, attacks on academic freedom, and the silencing of critics in the media or on late-night television—and stricter content moderation policies, and enforcement of those policies, on social media platforms to reduce the amount of hate, harassment, and harm that people have to encounter in their daily lives. The latter is not an affront to freedom of speech. It’s an assertion of private companies’ values and their commitment to a world that is safe, inclusive, and open to the participation of all, without girls feeling like they have to play online games under boys’ names (as 50% of them do) and without women, LGBTQ+ folks, and people of color feeling like racist, homophobic, sexist, or misogynistic attacks are just part of the cost of being online. That’s just not the world I want to live in.
Your work on online harassment and extremism began before some of the AI tools that are particularly ripe for abuse, such as generative AI and deepfake software, really took off. How has the landscape of online misogyny shifted with the explosion of tools that make generating and spreading abuse, particularly gendered disinformation, easier than ever?
This is such an important question, not only because of what has happened in the past couple of years with AI- and tech-enabled harassment and harms (like AI “undressing” apps that generate fake nude images, and AI-generated sexually explicit photos, videos, and fake porn using real people), but because of what we might expect in the next few years and beyond. I worry about deepfake software getting better, and I worry about the impact of AI chatbots: boys developing parasocial relationships, experiencing AI psychosis, or losing touch with reality and believing they have AI girlfriends whom they can make submissive, supportive, loving caregivers without ever having to engage in a reciprocal relationship or a discussion of someone else’s needs.
On the flip side, AI content moderation has the potential to radically improve the accuracy and speed of harmful content removal, while protecting vulnerable human content moderators—who are often employed as contractors in developing regions overseas, and who have suffered horrific traumatic harm from having to view the worst kinds of videos and images imaginable. The more we can rely on AI to remove that content and spare humans from exposure to it, the better.
In Man Up, you draw a connection between book bans in the United States, of which there were more than 6,870 instances in the 2024-2025 school year, and online abuse and extremism, issue areas central to PEN America’s work. Can you elaborate on that connection here?
I talk about the massive uptick in book banning as part of a moral panic about school curricula related to race, gender, and sexuality that is typically framed as a parents’ rights issue—i.e., the right to determine what kind of content their kids read in schools or public libraries. As I explain in the chapter on Erasure, the parents’ rights movement latched onto LGBTQ+-themed books and curricula sometime in the early pandemic period, claiming parents were asserting their rights to protect their own children from exposure to sexual content and the “leftist agenda.” Parents filed complaints about school curricula and books based on content related to sexual orientation and gender identity, as well as race and the teaching of slavery, the Holocaust, or other atrocities. The pace and scope of the bans accelerated rapidly through organized efforts by the same group of serial complaint filers. One Florida school district closed libraries, covered bookshelves with black paper to protect students from “objectionable or illegal” content, and ultimately pulled more than 1,600 books from its shelves for special review in 2023—including dictionaries, encyclopedias, and world almanacs—because they mentioned “sexual conduct.”
These efforts are part of what I call erasure: tactics of misogynistic containment and enforcement that normalize, legitimize, and mobilize violence, including on the extremist and terrorist fringe. In the book, I define misogyny as an enforcement mechanism, or set of tactics, intended to help maintain or defend patriarchal norms and expectations. Knowledge erasure through book and curricular bans and attacks on gender and women’s studies in universities are tactics that critically underpin violent extremism and other forms of interpersonal and mass violence. Legislative and curricular erasure are examples of the ordinary and everyday ways that misogyny is normalized, legitimized, and mainstreamed—which ultimately creates the conditions in which more extreme forms of misogyny and gender-based violence can flourish, including from the fringes. And these tactics of erasure have helped motivate extreme acts of harassment, threats, and violence against LGBTQ+ communities and their allies, in part by demonizing LGBTQ+ people as “groomers” and falsely claiming that they pose a danger to children.
Of the major social media platforms, only TikTok, and more recently Reddit, have classified online misogyny as a form of hate speech. Some platforms, like Meta, have even rolled back their existing guidelines on online abuse against women. What happens when platforms don’t treat online misogyny as hate speech?
There are so many bad outcomes from ignoring gender-based bigotry and hatred directed toward women and the LGBTQ+ community. At a minimum, it normalizes and legitimizes vile expressions that belittle, demean, and threaten people because of their identities. It can make victims feel unsafe, hypervigilant, or numb, making them less likely to report violations that do cross the line of company or platform policies, because they may come to see those interactions as just the cost of being online.
In January 2025, Meta CEO Mark Zuckerberg announced the company was ending its fact-checking program and introduced changes to content moderation that now allow users to say that women are property or LGBTQ+ people are mentally ill. This rollback of content moderation boundaries to allow for misogyny and anti-LGBTQ+ hate is a devastating development that will allow these forms of hate and exclusion to fester and become normalized.
You describe how dehumanizing language and online abuse, which might initially mask itself in memes and irony, can lay the psychological groundwork for acts of violence. Can you say more about how this psychological process works?
Dehumanization is not a requirement for acts of violence—there are plenty of acts of violence, unfortunately, in which the perpetrators know full well that the victims are human. But dehumanization is a pathway to violence in some cases and even a requirement for enacting it in others. We know this because of how soldiers are taught to dehumanize people, including non-combatants, with language like “collateral damage.”
In online contexts, there are many ways dehumanizing language can open up pathways to violence. One way is to literally remove a woman’s humanity: animal slurs, for example, are used derogatorily to dehumanize women or depict them as ugly (in addition to bitch, common examples are heifer, fat pig, landwhale, dog, cow, porker, horseface), to sexualize them and reduce them to sexual roles (cougar, kitten, wildcat, fox, bunny), or to otherwise diminish them (chick). In particularly vile online subcultures, women are dehumanized with words like foids (female humanoids) or roasties (a vulgar term for labia), or are referred to by categories of purportedly more attractive and unattainable women (Stacys) and average women (Beckys).
What are warning signs that you, or someone in your life, might be engaging with content that can serve as a gateway to violent misogyny and/or extremism?
Teens and young men might start saying things like “women have it easier” or “feminism has gone too far” or “it’s better if women stay at home” or “women shouldn’t vote.” These are all things I’ve had parents tell me their sons have said. Kids might also start wearing T-shirts or putting stickers on their laptop or icons in their online profiles that are antifeminist or anti-LGBTQ+. A shirt or meme that says “I identify as an attack helicopter,” for example, is a classic dismissal of trans or non-binary people’s identities as silly or ridiculous. You might hear things like “feminism is cancer” or “there are no good women left,” or terms used in the manosphere like “alpha” and “beta” men or slang words like “Chad” or “simp,” “sigma,” or the “red pill.”
Using these terms doesn’t necessarily mean that kids are fully down the rabbit hole, but it does indicate they are being exposed to these ideas in ways that merit discussion. My lab’s tool for parents and teachers on this, “Not Just a Joke”: Understanding and Preventing Gender- and Sexuality-Based Bigotry, produced with a team from the Southern Poverty Law Center, is tested and available for free online, and it offers additional examples and guidance.
In the conclusion, you cite more positive messaging about boyhood and masculinity as one way to interrupt this ecosystem of online abuse and violence. What is the power of speech, particularly counterspeech, as an act of resistance against violence?
In the simplest of terms, words matter. They matter when we see them in advertisements and persuasive propaganda, they matter when parents say them, and they matter when we speak up and challenge harmful abuse and violence online and offline. When no one speaks up to even acknowledge a harmful act, members of targeted groups often read silence as support. So if you disagree with something you are hearing or reading, it’s important to counter and challenge it—because it helps counter what can be a slippery slope of normalization and legitimation of hateful ideas. Speaking up is both an act of solidarity to targeted group members and a challenge to those who espouse hate—but the most important audience may be everyone else, who need to hear that it’s not “just a joke,” it’s not cool or funny to say hateful things, and that it won’t be tolerated by peers or observers.
How can readers, particularly those with young boys and men in their lives, advocate for and create safer digital spaces?
Most important is listening to what kids are saying—over the dinner table, in the carpool, or when they are talking with friends—and being alert to how they are potentially being influenced by these ideas. It can be hard not to react to those comments with judgment or shame, saying things like “that’s not how we raised you!” But the evidence shows the best approach is to lead with openness and curiosity, trying to avoid judgment. Discussions should include an acknowledgment that online communities have benefits as well as harms, opening the door to conversations about young people’s values and whether or not online influencers reflect those values. There are several specific things that parents, coaches, teachers, or other caregivers can do. To start:
- Talk to the boys and men in your life about the gendered content they see on their screens. What messages shape how they think about being a man? Boys need guidance and support to navigate what they see online, such as the abusive and degrading behavior in violent pornography, and to learn what it means to be an ally and what a broader range of healthy masculinity looks like as they grow into adults in a culture that promotes and privileges violence, stoicism, dominance, and hardness while rejecting and devaluing softness, vulnerability, sensitivity, and emotion.
- During these conversations, listen for grievance narratives (like feminism has gone too far, girls have it easier, women’s rights are taking away men’s rights, or everyone thinks men are all bad)—and try to interrupt the victim story in these framings. Help boys and young men understand how scapegoating works, and how isolation and loneliness are manipulated by influencers who blame women for their problems. Encourage thinking about structural solutions—would workforce training programs, or mentoring, solve anything? What else would change things for them for the better?
- Model healthy masculinities and expressions of manhood. Boys and men need positive messaging about manhood itself, especially at a moment when much of the discourse posits men as inherently dangerous, toxic, harmful, and violent. Helping boys navigate the transition to adulthood in ways that offer a sense of meaning, purpose, social wellbeing, and belonging is essential. Youth mentors beyond teachers are especially important; athletic coaches, for example, can integrate discussions about healthy masculinity, treating women with respect, and rejecting violence into their coaching. Early, consistent, and positive engagement with men who embrace connection with others, who engage with empathy and love in caregiving roles, and who model respectful treatment of women makes a real difference.
- Promote digital literacy and make kids the experts by having them teach you about their online lives. Ask kids to explain how platforms work or how they engage with them, what kinds of posts they like to make, and what their favorite content is. Emphasize teaching youth about how bad actors may seek to manipulate or groom them for their own profit. Building skepticism about manipulation, including how algorithms work, how outrage is packaged and sold, and how the dislike button is weighted much more heavily than the like button on some platforms, is a good place to start.
Cynthia Miller-Idriss is a sociologist and professor in the School of Public Affairs and the School of Education at American University, where she is the founding director of the Polarization and Extremism Research and Innovation Lab (PERIL). An MSNBC columnist and a regular commentator in US and international media, Miller-Idriss is the author of Hate in the Homeland: The New Global Far Right (Princeton), The Extreme Gone Mainstream: Commercialization and Far Right Youth Culture in Germany (Princeton), and Blood and Culture: Youth, Right-Wing Extremism, and National Belonging in Contemporary Germany.









