The novel, about a desperate young woman who is held hostage by a man she met online and forced to live as his pet, was self-published in February 2025. The book quickly found an audience among horror fans, and Hachette published it in the United Kingdom last fall and planned to release it in the United States this spring, billing it as “an unapologetic, visceral revenge horror novel.”
Earlier this year, Max Spero, the founder and chief executive of Pangram, an A.I. detection program, heard of the claims about “Shy Girl” and decided to run a test of the full text. Its results indicated that the book was 78 percent A.I. generated.
“I’m very confident that this is largely A.I. generated, or very heavily A.I. assisted,” said Spero, who posted his research on X in January.
The Times also analyzed passages from the novel using several A.I. detection tools and found recurring patterns characteristic of A.I.-generated text, like gaps in logic, excessive use of melodramatic adjectives and an overreliance on the rule of three.
In the months since “Shy Girl” was released in Britain, more readers voiced their suspicions online that the writer relied on A.I., citing nonsensical metaphors and odd, repetitive phrasing. As a chorus of allegations built online in late January that the novel was A.I. generated, Hachette stayed silent.
In response to questions from The New York Times about the A.I. allegations against “Shy Girl,” Hachette said that its imprint Orbit had canceled plans to release the novel in the United States and that it would discontinue the U.K. edition.
The author of “Shy Girl,” Mia Ballard, who according to her author bio writes poetry and lives in Northern California, has very little social media presence and doesn’t appear to have addressed the allegations of A.I. use on her feeds. In an email to The Times late on Thursday night, Ballard denied using A.I. to write “Shy Girl,” contending that an acquaintance she hired to edit the self-published version of the novel had used A.I.
The decision to cancel the publication came after a lengthy and thorough analysis, Hachette’s spokeswoman said, noting that the company values human creativity and requires authors to attest that their work is original. Hachette also asks authors to disclose to the company whether they are using A.I.
“Shy Girl” appears to be the first commercial novel from a major publishing house to be pulled over evidence of A.I. use. Its cancellation is a sign that A.I. writing is not only appearing in cheap self-published e-books that are flooding Amazon but is seeping into even traditionally published fiction.
That “Shy Girl” advanced so far in the editorial process, and was even released in the U.K. before publishers thoroughly investigated the claims of A.I. use, shows how unprepared many in the book world are for the rise of A.I. It also signals the dawn of an uncertain new era for publishing, as editors and readers alike are increasingly left wondering whether the prose they are reading was written by a human or a machine. [...]
For now, the most obvious disruptions from A.I. are hitting the self-publishing sphere, where authors say the ecosystem has been flooded with A.I. slop. But some in the industry believe that it’s only a matter of time before more books written with A.I. slip past editors at major houses. The technology has become increasingly widespread — as has the practice of picking up self-published books and rereleasing them through traditional imprints.
“It’s not merely inevitable,” said Thad McIlroy, a publishing industry consultant who has urged publishers to clarify their policies around the technology. “We’re in the midst of it.” [...]
Many publishers don’t explicitly prohibit authors from using A.I. in their book contracts. Instead, they rely on longstanding contractual clauses that require writers to affirm that their work is “original,” which many people in the book business now interpret as effectively banning the use of A.I. for text or image creation.
Publishers are also wary of A.I. content because currently, A.I.-generated text and art can’t be protected by copyright. Still, given the widespread uses for A.I. during research, outlining and other parts of the writing process, there’s little clarity on what constitutes its appropriate use. Many in the industry worry that publishers are leaving themselves vulnerable to scammers — or even writers who believe their A.I. use doesn’t cross any lines.
One problem in regulating authors’ A.I. use is that most corporate publishing houses don’t want to ban it outright. Editors recognize that authors use A.I. in a range of ways short of writing with it. And publishing executives want to ensure that their employees can use the technology for tasks like creating marketing copy, audio narration and translation.
The fact that publishing companies generally haven’t drawn a hard line around A.I. use is sowing confusion about what is permissible. Could a novelist ask A.I. to suggest plot twists, propose an alternate ending or polish a draft and still claim it as original work? At what point does the work stop being human?
by Alexandra Alter, NY Times | Read more:
Image: George Wylesol
[ed. I guess I'm of two minds on this. If the writing eventually becomes so good that it's indiscernible from a human-produced product (or even better), why should it be banned? Authors and publishing houses have a right to be concerned, but should they be treated any differently from other professions (programmers being an example) that are facing the same threat? Is it because they occupy a so-called creative space? How long will that last? I can imagine an AI producing very high quality material: fiction, non-fiction, screenplays, poetry, advertising copy, etc., because it can draw upon hundreds of years of examples, criticism, reviews, college courses, awards and whatever else is out there to discern the patterns, storylines and jokes that have proven to produce the highest impact and success. So what to do? The only thing I can think of is labeling: highlighting what's AI-produced and what's not, and letting the market decide its worth. Many people might actually prefer AI - along the lines of craft brews vs. Bud Light. Who knows? Another option would involve updating copyright laws, but that would require Congress to actually do something, which as we all know is pretty much a non-starter. Just another example of all the predicted disruption occurring in real time.]