“I have not written and will not be writing a novel called Nightshade Market,” Min Jin Lee posted on several social media platforms on Tuesday. “Thank you.”
The author of Free Food for Millionaires and Pachinko was responding to news that the book she didn’t write was included in a “Summer Reading List for 2025” published May 18 by the Chicago Sun-Times and a few days earlier by The Philadelphia Inquirer.
She was in good company. The “summer reading list” piece, which has since been removed from the Sun-Times website, ostensibly recommended 15 beach reads, and included classics like Ray Bradbury’s Dandelion Wine, and Bonjour Tristesse by Françoise Sagan.

But 10 of the recommended reads don’t exist, though the authors they were attributed to certainly do.
In addition to Lee, the list recommended nonexistent books by Isabel Allende, Andy Weir, Brit Bennett, Taylor Jenkins Reid, Rebecca Makkai, Rumaan Alam, Maggie O’Farrell, Delia Owens and 2025 Pulitzer Prize winner Percival Everett. It even included blurbs describing each of the phony books, and notes on why they were recommended.
“WTAF,” Makkai posted on social media. “I did not write a book called Boiling Point.”
The list, unsurprisingly, was generated by artificial intelligence.
Social media posts responding to the fiasco alternated between rage and mockery. “If you call yourself a writer but you are using AI to complete your story, you aren’t writing, you’re playing MadLibs,” novelist S.A. Cosby posted. “And don’t compare AI to Spellcheck. That’s like comparing a Terminator to a dictionary.”
The incident may seem comparatively silly given the steady drumbeat of negative news these days, but the consequences of major news outlets publishing fabricated material are no laughing matter, especially at a time when public trust in the Fourth Estate is shaky at best.
In today’s polluted information ecosystem, any such misstep risks compounding public confusion and weakening journalism’s credibility. An avoidable blunder tied to AI misuse doesn’t just damage a few outlets’ reputations, but chips away at journalism’s broader ability to inform the public and hold power to account.
The Sun-Times acknowledged the debacle, publishing an explanation that the story appeared in a special section dubbed “Heat Index,” which was produced by syndicator King Features, a unit of Hearst. It ran both in print and online.
“To our great disappointment, that list was created through the use of an AI tool and recommended books that do not exist,” the explanation said. “We are actively investigating the accuracy of other content in the special section.” The paper also pledged to update its policies “so that all our third-party licensed editorial content comply with our journalistic standards.”
The incident came two months after the paper, which became part of Chicago Public Media in 2022, lost 20% of its staff to a buyout offer. The supplement also appeared in a “handful” of other papers, King Features said, including the Inquirer. The Inquirer said using AI to create content in the 56-page feature section was a “violation of our own internal policies and a serious breach.”
“We are looking at ways to improve the vetting of content in these supplements going forward,” said Inquirer editor Gabriel Escobar.
The independent news site 404 Media, which was first to report the mess, noted that at least one other story in the section also appeared to be created using AI, pointing to a piece about hammocks that used sources that could not be verified. And The Guardian reported that a story headlined “Summer food trends” quoted a purported Cornell University food anthropologist named Catherine Furst, a name that does not appear on the university’s staff directory.
The Chicago Sun-Times Guild, the union that represents journalists in the newsroom, posted a scathing response: “We take great pride in the union-produced journalism that goes into the respected pages of our newspaper and on our website. We’re deeply disturbed that AI-generated content was printed alongside our work.”
Artificial intelligence presents new opportunities for cash-strapped newsrooms, offering tools that can support journalists by automating repetitive tasks, enhancing data investigations, and verifying visual content. But this technology also comes with risks at a time when the news industry is already under financial strain and struggling to maintain public trust.
As newsrooms shrink and resources dwindle, in large part because of earlier waves of digital disruption that gutted advertising revenue and transformed how audiences consume news, some outlets are turning to AI as a cost-saving measure. When deployed without proper human oversight, AI can introduce errors, amplify misinformation, and undermine audience confidence.
It’s certainly not the first time once-respected outlets have been caught publishing erroneous AI-generated content. Sports Illustrated, for example, was outed in 2023 for publishing AI-generated stories under fake bylines. Months before that, the tech news website CNET was called out for AI-produced stories riddled with “very dumb errors.”
Responsible newsrooms are using AI to augment, not replace, human reporting. The most effective uses of these tools support journalism’s core values of speed, accuracy, depth, and public service while remaining grounded in editorial judgment and transparency.
Freelance writer Marco Buscaglia took responsibility for the book list in a Facebook post.
“A really stupid error on my part,” he said. “I do use AI for background when I work on certain stories and try to vet the information as carefully as possible to make sure it’s accurate. In this case, I missed this content entirely and the story ran as is.”
“I am completely at fault here—just an awful oversight and a horrible mistake,” said Buscaglia, whom King Features said it will no longer work with. “I hate that this reflects poorly on journalists who do such great work and make sure their stories are accurate. I feel horrible about misleading readers with bad information.”
The apology seems sincere, but the damage is done.
This unforced error might have been the result of a shortcut, but it still undermines the credibility that journalism depends on. It is a stark reminder that while AI offers innovative tools for creating content, media organizations must exercise diligence and uphold editorial standards. With trust in the news media at historic lows, that responsibility is especially important amid a growing flood of misinformation. Sadly, this also represents a missed opportunity for talented authors to have their actual work featured.