(This is the sixth in a series on generative AI content.)
As generative AI becomes more widespread, a troubling phenomenon is taking root: a rise in false accusations against artists and writers. More and more, real people are being accused of using AI-generated content simply because their work contains certain phrases, structures, or styles that some associate with machine output. But the reality is more complicated—and more disturbing—than it looks on the surface.
Many of these accusations are made by individuals who use AI themselves. They’ve seen how generative systems tend to work and can recognize patterns common in AI-generated content. The problem? These same individuals often don’t understand that the reason AI uses those patterns is that humans used them first. Generative AI is trained on human-created work—academic writing, published articles, books, essays, even social media posts. So yes, it reflects human habits of expression. That doesn’t mean all structured, well-written content is AI-generated. It just means AI has learned how people actually write.
Unfortunately, this confusion—combined with mounting fear and mistrust—has given rise to a culture of suspicion, one fueled by another deeply flawed element in this equation: AI detectors.
AI Detectors Are a Crapshoot
AI detection tools are often treated as conclusive proof of machine authorship. In reality, they are anything but. These tools are wildly inconsistent, unreliable, and opaque in their methods. Many rely on superficial signals like word frequency, sentence length, “perplexity,” or “burstiness”—jargon that sounds technical but fails to grasp the nuance of real writing. Worse still, many of the markers they flag as “AI-like” are things found in everyday academic writing, journalism, and even personal essays: proper grammar, formal phrasing, transitional sentences, and domain-specific terminology.
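To make that concrete, here is a toy sketch of the kind of surface-level signal these detectors lean on. It is not any real detector's algorithm; the `burstiness` function, the coefficient-of-variation scoring, and the sample passages are all invented for illustration. It simply measures how much sentence lengths vary, which is roughly what "burstiness" describes, and shows that polished human prose scores just as uniformly as machine output would.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: how much sentence lengths vary.

    Detectors often treat low variation as 'AI-like', even though
    disciplined human prose (academic writing, journalism) is just
    as uniform by design.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: spread of sentence lengths relative to the mean.
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Invented sample passages, both written by a human.
polished = (
    "The committee reviewed the proposal in detail. "
    "Each section was evaluated against the stated criteria. "
    "The final report was submitted before the deadline."
)
casual = (
    "Honestly? I loved it. The pacing dragged a bit in the middle, "
    "but the ending more than made up for it, and I cried."
)

print(f"polished human prose: {burstiness(polished):.2f}")  # low score, reads as 'AI-like'
print(f"casual human prose:   {burstiness(casual):.2f}")    # higher score, reads as 'human'
```

A careful academic writer and a language model can land on identical scores under a metric like this, which is exactly why such signals cannot serve as proof of authorship.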
English-language learners are particularly vulnerable to false positives. These writers often rely on structured templates, academic tone, and grammatically precise phrasing—exactly the things detectors are trained to flag. So now, writers who’ve worked hard to improve their English are being punished for their success, accused of letting a machine do the work simply because they’ve learned to sound polished.
This has led to an even more disturbing trend: demands for “proof of humanity.” Some people are now asking writers to hand over their Google Docs or Microsoft Word revision histories, as if writing should come with forensic evidence. Not only do many writers not use these platforms (or have those features turned on), but this kind of surveillance mindset is a massive violation of privacy. It assumes guilt and forces people to defend themselves against faulty technology and corporate paranoia.
And all of this? It’s often built around a tool made by a for-profit company with no obligation to be accurate, fair, or accountable.
AI Users Need to Be Transparent
A big part of this growing mistrust comes from a lack of transparency from those who do use AI. Many hide or minimize the role AI plays in their work. Some even lie outright, presenting machine-generated content as entirely human-made. This dishonesty creates an atmosphere of confusion and hostility. It erodes trust. And it gives false accusers more fuel for their fire.
If using generative AI truly isn’t something to be ashamed of—as many of its defenders claim—then there should be no hesitation in being honest about it. Transparency won’t solve every problem, but it could start normalizing responsible use and reduce the stigma. Instead, the refusal to disclose AI usage only heightens tension and makes it harder for genuine creators to be seen and believed.
Witch Hunts Don’t Protect Art—They Undermine It
On the flip side, some people opposed to generative AI have become so vigilant that they’ve started turning their suspicion against fellow creatives. The intention—to preserve and protect human-made art—is understandable. But the methods are becoming harmful.
False accusations don’t stop the misuse of AI. If anything, they push people toward using it. Creators who are constantly doubted, interrogated, and dismissed for being “too polished” or “too fast” may eventually give up trying to avoid AI—especially if the punishment is the same whether or not they’re actually using it. Why not take the shortcut, some might think, if the assumptions won’t change either way?
This kind of scorched-earth approach only serves to weaken the very creative community it claims to defend.
It’s Time for Accountability—Not Accusation
The solution isn’t more suspicion. It’s more accountability. AI users need to be open about how and when they use these tools. AI detectors must be called out for what they are: unreliable, biased, and often harmful. And communities need to reject the instinct to interrogate their members like criminals for writing well, creating quickly, or being influenced by common trends.
We are in the middle of a fundamental shift in what it means to create. And in moments like this, it’s easy to let fear drive us apart. But if we truly care about protecting art, then we need to start by protecting artists—especially from each other.
The witch hunt needs to end. The distrust must be redirected—toward the systems that are exploiting human work, not the people still trying to make it.