Readers are furious after discovering blatant ChatGPT prompts left inside published romance novels, exposing what critics call a “lazy fraud” perpetrated on paying customers.
The controversy erupted when eagle-eyed fans spotted revision notes that read unmistakably like AI instructions embedded directly in the published text of books by authors K.C. Crowne and Lena McDonald. The discoveries quickly spread across Reddit, Goodreads, and Bluesky, igniting a firestorm of reader outrage.
The Smoking Gun: “Rewrite This in J. Bree’s Style”
The most damning evidence came from a since-deleted passage in chapter three of one novel that read: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements.”
This wasn’t just proof of AI usage—it was evidence of deliberately attempting to copy another living author’s distinctive voice. J. Bree is a popular romantasy author with a devoted fanbase, and readers were incensed that AI was being instructed to mimic her style.
“This isn’t just lazy, it’s theft,” one Goodreads reviewer wrote. “They’re not just using AI to write—they’re using AI to plagiarize the feel of other authors’ work.”
Authors Double Down Instead of Apologizing
Rather than issuing apologies, both authors have publicly defended their use of artificial intelligence, further inflaming the controversy. This defiant stance has alienated readers who might have been willing to forgive an honest mistake.
The publishing community remains divided. Some argue that AI assistance is no different from hiring ghostwriters or editors. Others contend that selling AI-generated content as original human creative work constitutes fraud.
Amazon’s AI Slop Problem
This scandal highlights a growing crisis on Amazon's Kindle platform, where AI-generated books are flooding the marketplace at an alarming rate. Human authors report their work being buried in search results by waves of hastily produced AI content.
Some AI-generated books have been caught impersonating real authors, using similar names or copying cover styles to confuse readers. Others contain nonsensical passages, factual errors, or—as in this case—leftover prompts that reveal their artificial origins.
The 45% Problem
A recent survey of over 1,200 authors revealed a startling statistic: approximately 45% admit to using some form of generative AI in their work, whether for writing, marketing, or creating cover illustrations.
This has created a trust crisis between authors and readers. How can readers know if the book they’re purchasing was written by the human whose name appears on the cover?
What This Means for Readers
For now, readers must be vigilant. Warning signs of AI-generated content include:
- Unusually prolific authors releasing multiple books per month
- Generic, repetitive prose lacking distinctive voice
- Factual inconsistencies within the same book
- Oddly formal or stilted dialogue
- Reviews mentioning “AI” or “ChatGPT”
The romance and romantasy communities, where this scandal originated, are now actively sharing lists of suspected AI authors and calling for Amazon to implement stricter disclosure requirements.
Industry Response
Major publishers have begun adding "human-written" guarantees to their marketing, attempting to differentiate their traditionally published titles from the flood of AI content in self-publishing. Some literary agents now require authors to sign declarations that their manuscripts were not generated by AI.
Whether Amazon will take action to address the problem—or continue profiting from the volume of AI-generated uploads—remains to be seen. For now, the burden falls on readers to protect themselves from what many are calling the biggest fraud in publishing history.