Be wary of AI detectors

This post is a response to yet another incident where a content creator used an AI detector to “call out” a writer, this time for supposedly using AI-generated book covers. It’s part of a growing trend: creators, readers, and reviewers running content through AI detection tools and treating the results as indisputable proof of wrongdoing.

Yes, in this particular case, there's a damned good chance the covers in question are AI-generated. The author, Lena McDonald, absolutely did leave an AI prompt in the final published version of her book Darkhollow Academy: Year 2. But even then, the way people are rushing to judgment using these unreliable "tools" (and I hesitate to even call them tools at this point) is deeply troubling.

AI detectors are built on guesses. That guess might be based on texture in an image, sentence structure in a paragraph, or simply how “natural” something appears to the model. But that’s all it is: a guess.

I’ve personally run a still image from legendary filmmaker Hayao Miyazaki’s Howl’s Moving Castle through an AI image detector and…


Miyazaki is very, VERY well known to be INCREDIBLY against AI. Howl's Moving Castle is also from 2004. There is zero chance it's AI.

I once uploaded some photos of my daughter to see what an AI detector would say…


They, too, were flagged as AI. Thanks, I suppose, for thinking my daughter is too beautiful to be real…

She is quite lovely, but she’s also real, and I took these photos two weeks ago in our front yard.

A dragon image I was working on for a book I've abandoned (I've got far too many manuscripts from the past five years that I'm already editing, and I don't need to be starting a new series right now):

So…which is it? The answer is human, and I know because I made it using Procreate. But how did one detector come back with two drastically different results for the exact same image?

These aren’t isolated mistakes. They’re evidence of a systemic problem with how these detectors work.

The biggest issue isn't that AI detectors are wrong. No. It's how people use them. When creators or influencers wave around the results like they're proof of guilt, we're no longer having a conversation about AI. We're having a witch hunt, which I recently wrote about.

This culture of accusation is just as problematic as the use of AI itself. Creators who value human work, storytelling, and originality should absolutely be concerned about AI use. But weaponizing faulty detectors to root out “offenders” only breeds fear, distrust, and division. It creates an environment where originality and authenticity are constantly questioned, especially if the work happens to look “too good” or “too polished,” though sometimes even if it looks “too bad.” Which is it, people? Are we fucked either way?

As AI gets better at mimicking human creativity, the supposed "tells" have become so varied that it's nearly impossible for a writer with any skill to scrub every tell from every detector without stripping out their voice so severely that they're left with the most generic text imaginable, which, in turn, also draws accusations of AI. Meanwhile, AI detectors get jumpier, mistaking poetic, evocative, or stylistically unique work for synthetic output. The result is a harmful feedback loop: creators push boundaries and get accused, others hold back out of fear, and the creative landscape becomes warped by suspicion. Anything unfamiliar or non-mainstream gets labeled "fake," not necessarily because it is, but because it doesn't fit neatly into the narrow patterns these detectors are trained to recognize. And given how faulty they are, that recognition amounts to little more than wild guessing.

This isn't a defense of Lena McDonald. Honestly, I hesitate even to make a determination on her covers, since I recognize I'm biased against her: she absolutely does use AI, as evidenced not only in her book but in her own words. What I'm calling for is better judgment about when to make accusations. AI detectors are about as reliable as crystal balls, and the reality is that someone using AI for text doesn't always mean they used AI for covers, and vice versa. Accusations should be made carefully, with actual evidence: not based solely on the opinion of an algorithm, not because something looks good or bad, not because a person used AI for a different media type, and especially not based on these horrid detectors that otherwise intelligent people are giving far too much weight.

So let's not trade one problem for another. When the "tools" we use to "protect" creativity start undermining it, we all lose. And right now? We're losing, and that's pretty devastating.
