(NEW YORK)–Warning that generative artificial intelligence could pose new threats to free expression by supercharging deception and repression and infringing on the work of writers and artists, PEN America today released Speech in the Machine: Generative AI’s Implications for Free Expression, an analysis of this watershed technological moment and its implications for civic trust, free speech, creative expression, authenticity and the very notion of truth.

The leading U.S. free expression and writers’ organization, in its new white paper, addresses knowns and unknowns about generative AI tools while offering principles to guide policymakers and others wrestling with the possible impacts on society.

The white paper adds free expression to the concerns being voiced over generative AI, warning that the impact on this basic right, for good or for ill, will depend on whether governments and private companies “define and execute their human rights responsibilities.”

The analysis comes as the new technology has emerged as a central issue in the Hollywood writers’ and actors’ strikes, with both groups raising concerns over the use of their creative output and generative AI’s potential to harm livelihoods. Authors have also filed suit arguing that current generative AI models infringe on their copyrights.

“If writers and creators are increasingly displaced by machines, it poses a threat not only to those creative artists, but to the public as a whole,” the white paper states. “The scope of inspiration from which truly new creative works draw may be narrowed, and the very power of literature, television, and film to catalyze innovative ways of thinking may be undercut.” The paper suggests that its use in creative fields could potentially lead to works “that are less rich or reflective of the expansive nuances of human experience and expression.”

Suzanne Nossel, PEN America’s CEO, said: “We cannot afford another failure of imagination when it comes to the ramifications of generative AI for society. As we come to grips with the immense potential of large language models, we need to think expansively about their potential to reshape our workplaces, schools, culture and communities. Large language AI models are innately derivative; the greater their influence the more uniform our television, film and books may become. Their potential to tailor entertainment output to meet user demands may satisfy the consumer while impoverishing our human collective experience. Social media has taught us that once these systems grow to scale, their power to reshape society can outstrip the ability of courts, regulators and even creators themselves to contain potential harms. We are on notice that these risks exist, and bear an urgent responsibility to assess and manage them before AI overtakes us.”

READ Suzanne Nossel’s essay in The New Republic, “Hollywood’s Fight Against AI Will Affect Us All,” which discusses the potential of generative AI to reshape culture, discourse and our dealings with one another.

The 36-page PEN America white paper argues that generative AI tools may spur inspiration and ingenuity, or overtake human communication in ways that undercut authenticity in public discourse and the underlying value of open expression.

“Generative AI and automated tools represent a sea change in how artists, journalists, and writers create, interact with, and disseminate content, and how the public understands and consumes it. These changes offer opportunities for new forms of expression and creativity, while simultaneously posing threats to expressive conduct,” the white paper states.

Summer Lopez, chief program officer for free expression at PEN America, said: “Generative AI is on a path to revolutionize how humanity engages with language and information, and to reshape imaginative possibilities across many fields. As these technologies evolve, we must safeguard the uniquely human spark of inspiration that drives the creative process, protect the work of writers and artists from usurpation or exploitation, and anticipate how generative AI can turbocharge existing threats to free expression. We must recognize and head off the risks now.”

Beginning in 2017, PEN America raised the alarm about disinformation in a prescient landmark report, Faking News, warning about threats, including the rise of strongman politics and dangers to the management of public health crises, that were seen then as “far-fetched” but now reflect reality.

The piecemeal and hotly contested efforts by tech companies and social media platforms to counter disinformation “may in retrospect look like a mere rehearsal for more disruptive threats posed by generative AI,” the white paper says.

Generative AI is arriving at precisely the moment when key social media platforms like Facebook and Twitter/X have drastically cut staff responsible for online trust and safety.

In 2019, PEN America’s Losing the News report documented the collapse of local journalism in the U.S. and its consequences for democracy and civic trust, an unforeseen result of the rise of the internet and social media. That experience underscores the importance of anticipating the implications generative AI may have for the information ecosystem as a whole.

Among other concerns related to free expression, PEN America raised the following:

  • Generative AI could further complicate the economic challenges facing journalism, a field upon which democracy relies, and shrink the pool of jobs within it. In addition, the technologies could make the creation of fraudulent news sources easier, and the sites themselves more convincing.
  • Generative AI potentially blurs the lines between what is real, what is a human creation, and what is machine-generated, putting at risk the protection and ownership of ideas and posing a potential threat to the livelihoods of writers and artists.
  • The tools are making it cheaper and easier for those with malign intent to wage more sophisticated and convincing disinformation campaigns, even as those campaigns become harder to detect. Online abuse campaigns, particularly those waged by governments against their critics, often rely on disinformation, and generative AI can be harnessed to catapult it to new levels.
  • Efforts to address the threats posed by generative AI risk becoming censorious or chilling, whether through governments deliberately restricting how people can use generative AI or through the use of AI’s threats as a pretext to enact new restrictions on expression.
  • Research findings suggest that generative AI tools could be wielded, or weaponized, to manipulate opinions and skew public discourse via subtle forms of influence on their users. AI chatbots designed to reflect a particular ideology could further entrench existing cultural and political echo chambers.

Recent efforts by the White House and Congress to address issues posed by generative AI technologies include the Biden Administration’s securing of voluntary commitments from AI companies for the safe, secure, and transparent development of these new technologies, as well as Senate Majority Leader Chuck Schumer’s SAFE Innovation framework (“security, accountability, protecting our foundations, and explainability”), which is meant to guide comprehensive AI-focused legislation following a May Congressional hearing on AI that raised concerns such as harmful content, disinformation, racial bias, and a lack of transparency.

In its recommendations to policymakers, PEN America urges a regulatory approach that incorporates both risk considerations and fundamental rights, noting the importance of, and overlap between, the approaches. Observing what it views as a “false dichotomy” between either a “rights-based” approach to regulation, such as the Biden Administration’s Blueprint for an AI Bill of Rights, or a “risk-based” approach, such as the European Union’s AI Act, the white paper states: “Not only is the rights vs. risk framing unnecessary, some of those documents that purport to fall into one camp or the other are actually both.”

The white paper lays out additional guiding principles for policymakers and industry with regard to this emerging technology. Among these are:

  • Consultations with human rights advocates, scientists, academics, and other experts to craft workable policy solutions and ensure regulations support free expression, speech, creativity, and innovation
  • Rather than being fixed in perpetuity or requiring consensus to update, regulations should build in flexibility, such as regular review and adaptation, to respond to technological change
  • Regulators should seek to ensure transparency and access for researchers to algorithms, data sources and uses, and other mechanics of AI technologies
  • Prioritizing fairness and equity to reduce bias and move closer to building trustworthy systems. Companies can advance these priorities by ensuring AI models are designed and built by diverse teams
  • Security and privacy should provide the foundation for AI system development and deployment, with practices such as regular audits or surveys to detect anomalies, protection against attacks by third parties, and encryption benchmarks that must be met.
  • When AI is used to automate decision-making, for example in content moderation or search engine results, accessible and effective appeals and remedy options must accompany the automated process.
  • The business models that will drive the spread of generative AI are only now being invented and refined; as these mechanisms emerge and before they become entrenched, it will be essential to rigorously assess how they shape AI-driven content and discourse.

About PEN America

PEN America stands at the intersection of literature and human rights to protect open expression in the United States and worldwide. We champion the freedom to write, recognizing the power of the word to transform the world. Our mission is to unite writers and their allies to celebrate creative expression and defend the liberties that make it possible. To learn more, visit PEN.org.

Contact: Suzanne Trimel, [email protected], 201-247-5057