I Set a Trap to Catch My Students Cheating With AI and the Results Were Shocking
hubie writes:
Students are not just undermining their ability to learn, but to someday lead:
I have been in and out of college classrooms for the last 10 years. I have worked as an adjunct instructor at a community college, I have taught as a graduate instructor at a major research institution, and I am now an assistant professor of history at a small teaching-first university.
Since the spring semester of 2023, it has been apparent that an ever-increasing number of students are submitting AI-generated work. I am no stranger to students trying to cut corners by copying and pasting from Wikipedia, but the introduction of generative AI has enabled them to cheat in startling new ways, and many students have fully embraced it.
Plagiarism detectors have and do work well enough for what I might call "classical cheating," but they are notoriously bad at detecting AI-generated work. Even a program like Grammarly, which is ostensibly intended only to clean up one's own work, will set off alarms.
So, I set out this semester to look more carefully for AI work. Some of it is quite easy to notice. The essays produced by ChatGPT, for instance, are soulless, boring abominations. Words, phrases and punctuation rarely used by the average college student - or anyone for that matter (em dash included) - are pervasive.
But there is a difference between recognizing AI use and proving its use. So I tried an experiment.
A colleague in the department introduced me to the Trojan horse, a trick capable of both conquering cities and exposing the fraud of generative AI users. This method is now increasingly known (there's even an episode of "The Simpsons" about it) and likely has already run its course as a plausible method for saving oneself from reading and grading AI slop. To be brief, I inserted hidden text into an assignment's directions that the students couldn't see but that ChatGPT can.
I assigned Douglas Egerton's book "Gabriel's Rebellion," which tells the story of the thwarted rebellion of enslaved people in 1800, and asked the students to describe some of the author's main points. Nothing too in-depth, as it's a freshman-level survey course. They were asked to use either the suggestions I provided or to write about whatever elements of Egerton's argument they found most important.
I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.
The percentage was surprising and deflating. I explained my disappointment to the students, pointing out that they cheated on a paper about a rebellion of the enslaved - people who sacrificed their lives in pursuit of freedom, including the freedom to learn to read and write. In fact, Virginia made it even harder for them to do so after the rebellion was put down.
I'm not sure all of them grasped my point. Some certainly did. I received several emails and spoke with a few students who came to my office and were genuinely apologetic. I had a few that tried to fight me on the accusations, too, assuming I flagged them as AI for "well written sentences." But the Trojan horse did not lie.
There's a lot of talk about how educators have to train students to use AI as a tool and help them integrate it into their work. Recently, the American Historical Association even made recommendations on how we might approach this in the classroom. The AHA asserts that "banning generative AI is not a long-term solution; cultivating AI literacy is." One of their suggestions is to assign students an AI-generated essay and have them assess what it got right, got wrong or if it even understood the text in question.
Read more of this story at SoylentNews.