We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI
About a year and a half ago, I wrote about my kid's experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut's Harrison Bergeron, a story about a dystopian society that enforces "equality" by handicapping anyone who excels, and the AI detection tool flagged the essay as "18% AI written." The culprit? Using the word "devoid." When the word was swapped out for "without," the score magically dropped to 0%.
The irony of being forced to dumb down an essay about a story warning against the forced suppression of excellence was not lost on me. Or on my kid, who spent a frustrating afternoon removing words and testing sentences one at a time, trying to figure out what invisible tripwire the algorithm had set. The lesson the kid absorbed was clear: write less creatively, use simpler vocabulary, and don't sound too good, because sounding good is now suspicious.
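For what it's worth, here's a rough sense of why a single word swap can move a score like that. The sketch below is purely hypothetical: I have no idea what the checker on that Chromebook actually does, and commercial detectors are far more elaborate. The function and word-frequency table (ai_likelihood, COMMONNESS) are made up for illustration. It just shows how a naive heuristic that treats uncommon vocabulary as machine-like will flag "devoid" while waving "without" through:

# Purely illustrative toy scorer. It is NOT how any real AI detector works;
# it only demonstrates how penalizing rare vocabulary produces the
# "devoid bad, without fine" behavior described above.

# Hypothetical per-word commonness scores (1.0 = very common, 0.0 = very rare).
COMMONNESS = {
    "the": 1.0, "a": 1.0, "society": 0.7, "is": 1.0,
    "without": 0.95, "devoid": 0.15, "of": 1.0, "freedom": 0.6,
}

def ai_likelihood(text: str) -> float:
    """Return a fake 0-100 'AI %' that rises as vocabulary gets less common."""
    words = [w.strip(".,").lower() for w in text.split()]
    rarity = [1.0 - COMMONNESS.get(w, 0.5) for w in words]
    return round(100 * sum(rarity) / len(rarity), 1)

print(ai_likelihood("The society is devoid of freedom."))   # ~25.8, "suspicious"
print(ai_likelihood("The society is without freedom."))     # ~12.5, "fine"

Swap one rare word for a common one and the toy score drops, which is exactly the kind of behavior my kid had to reverse-engineer by hand, one sentence at a time.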
At the time, I worried this was going to become a much bigger problem. That the fear of "AI cheating" would create a culture that actively punished good writing and pushed students toward mediocrity. I was hoping I'd be wrong about that.
Turns out... I was not wrong.
Dadland Maye, a writing instructor who has taught at many universities, has published a piece in the Chronicle of Higher Education documenting exactly how this has played out across his classrooms, and it's even worse than what I described. Because the AI detection regime hasn't just pushed students to write worse. It has actively pushed students who never used AI to start using it.
This fall, a student told me she began using generative AI only after learning that stylistic features such as em dashes were rumored to trigger AI detectors. To protect herself from being flagged, she started running her writing through AI tools to see how it would register.
A student who was writing her own work, with her own words, started using AI tools defensively: not to cheat, but to make sure her own writing wouldn't be accused of cheating. The tool designed to prevent AI use became the reason she started using AI.
This is the Cobra Effect in its purest form. The British colonial government in India offered a bounty for dead cobras to reduce the cobra population. People started breeding cobras to collect the bounty. When the government scrapped the program, the breeders released their now-worthless cobras, making the problem worse than before. AI detection tools are our cobra bounty. They were supposed to reduce AI use. Instead, they're incentivizing it.
And this goes well beyond one student's experience. Maye describes a pattern spreading across his classrooms:
One student, a native English speaker, had long been praised for writing above grade level. This semester, a transfer to a new college brought a new concern. Professors unfamiliar with her work would have no way of knowing that her confident voice had been earned. She turned to Google Gemini with a pointed inquiry about what raises red flags for college instructors. That inquiry opened a door. She learned how prompts shape outputs, when certain sentence patterns attract scrutiny, and ways in which stylistic confidence triggers doubt. The tool became a way to supplement coursework and clarify difficult material. Still, the practice felt wrong. "I feel like I'm cheating," she told me, although the impulse that led her there had been defensive.
A student praised for years for being an exceptional writer now feels like a cheater because she had to learn how AI detection works in order to protect herself from being falsely accused. The surveillance apparatus has turned writing talent into a liability.
Then there's this:
After being accused of using AI in a different course, another student came to me. The accusation was unfounded, yet the paper went ungraded. What followed unsettled me. "I feel like I have to stay abreast of the technology that placed me in that situation," the student said, "so I can protect myself from it." Protection took the form of immersion. Multiple AI subscriptions. Careful study of how detection works. A fluency in tools the student had never planned to use. The experience ended with a decision. Other professors would not be informed. "I don't believe they will view me favorably."
The false accusation resulted in the student subscribing to multiple AI services and studying how the detection systems work. Not because they wanted to cheat, but because they felt they had no other option for self-defense. And then they decided to keep quiet about it, because telling professors about their AI literacy would only invite more suspicion.
Look, I get it: some students are absolutely using AI to cheat, and that's a real issue educators have to deal with. But the detection-first approach has created an incentive structure that's almost perfectly backwards. Students who don't use AI are punished for writing too well. Students who are falsely accused learn that the only defense is to become fluent in the very tools they're accused of using. And the students savvy enough to actually cheat? They're the ones best equipped to game the detectors. The tools aren't catching the cheaters; they're radicalizing the honest kids.
As Maye explains, this dynamic is especially brutal at open-access institutions like CUNY, where students already face enormous pressures:
At CUNY, many students work 20 to 40 hours a week. Many are multilingual. They encounter a different AI policy in nearly every course. When one professor bans AI entirely and another encourages its use, students learn to stay quiet rather than risk a misstep. The burden of inconsistency falls on them, and it takes a concrete form: time, revision, and self-surveillance. One student described spending hours rephrasing sentences that detectors flagged as AI-generated even though every word was original. "I revise and revise," the student said. "It takes too much time."
Just like my kid and the school-provided AI checker, Maye's student spent a bunch of wasted time "revising" to avoid being flagged.
Students spending hours rewriting their own original work, work that they wrote, because an algorithm decided it sounded too much like a machine. That's time taken away from studying, working, caring for family, or, you know, actually learning to write better.
Learning to revise is a key part of learning to write. But revisions should be done to serve the intent of the writing. Not to appease a sketchy bot checker.
What Maye articulates so well is that the damage here goes beyond false positives and wasted time. The deeper problem is what these tools teach students about writing:
Detection tools communicate, even when instructors do not, that writing is a performance to be managed rather than a practice to be developed. Students learn that style can count against them, and that fluency invites suspicion.
We are teaching an entire generation of students that the goal of writing is to sound sufficiently unremarkable! Not to express an original thought, develop an argument, find your voice, or communicate with clarity and power, but to produce text bland enough that a statistical model doesn't flag it.
The word "devoid" is too risky. Em dashes are suspicious. Confident prose is a red flag.
My kid's Harrison Bergeron experience was, in retrospect, a perfect preview of all of this. Vonnegut warned about a society that forces everyone down to the lowest common denominator by handicapping anyone who shows ability. And here we are, with AI detection tools functioning as the Handicapper General of student writing, punishing fluency, penalizing vocabulary, and training students to sound as mediocre as possible to avoid triggering an algorithm that can't even tell the difference between a thoughtful essay and a ChatGPT output.
Maye eventually did the only sensible thing: he stopped playing the game.
Midway through the semester, I stopped requiring students to disclose their AI use. My syllabi had asked for transparency, yet the expectation had become incoherent. The boundary between using AI and navigating the internet had blurred beyond recognition. Asking students to document every encounter with the technology would have turned writing into an accounting exercise. I shifted my approach. I told students they could use AI for research and outlining, while drafting had to remain their own. I taught them how to prompt responsibly and how to recognize when a tool began replacing their thinking.
Rather than taking a "guilt-first" approach, he took one that dealt with reality and focused on what would actually be best for the learning environment: teach students to use the tools appropriately, not as a shortcut, and don't start from a position of suspicion.
The atmosphere in my classroom changed. Students approached me after class to ask how to use these tools well. One wanted to know how to prompt for research without copying output. Another asked how to tell when a summary drifted too far from its source. These conversations were pedagogical in nature. They became possible only after AI use stopped functioning as a disclosure problem and began functioning as a subject of instruction.
Once the surveillance regime was lifted, students could actually learn. They asked genuine questions about how to use tools effectively and ethically. They engaged with the technology as a subject worth understanding rather than a minefield to navigate. The teacher-student relationship shifted from adversarial to educational, which is, you know, kind of the whole point of school.
That line Maye uses, "these conversations were pedagogical in nature," keeps sticking in my brain. The fear of AI undermining teaching made it impossible to teach. Getting past that fear brought back the pedagogy. Incredible.
This piece should be required reading for every educator thinking that "catching" students using AI is the most important thing.
As Maye discovered through painful experience, the answer is to stop treating AI as a policing problem and start treating it as an educational one. Teach students how to write. Teach them how to think critically about AI tools. Teach them when those tools are helpful, when they're harmful, and when they're a crutch. And for the love of all that is good, stop deploying detection tools that punish good writers and push everyone toward a bland, algorithmic mean.
We are, quite literally, limiting our students' writing to satisfy a machine that can't tell the difference. Vonnegut would have had a field day.