“Miss!” my student wailed, “my story is showing up as AI on all the checkers, but it’s not AI. I swear I wrote it!”
Forgive me if I immediately doubted her. The story was due that day and, from what I could tell, she’d spent at least as much time in class playing on her phone under her desk as she had actually writing. I also knew that she was very grades-driven, often sharing her results with friends as they measured their success and, I think, some of their self-worth by the numbers on their assignments. Others in the class might value learning over grades, but with her… I wasn’t sure.
Her friend made a face. “I hate those AI checkers,” she muttered, and she glared at me, as if daring me to say that her friend’s work might be faked. I knew she had just gone through a big blow-up with another teacher over work that was flagged as AI. She had been extremely upset and had threatened to drop the class (which would have harmed her far more than the teacher). Her guidance counsellor, her friends, and I had talked her down, but it had been a near-run thing. And honestly? I figured she probably had used AI and was just mad that she got caught. Again, a nice kid, but very grade-focused.
I pulled up the first student’s story and glanced at it. My heart sank. The first paragraph was really good. “Don’t worry,” I said, even though I was worried, “I’ll check over your process work and trust that over machines.”
Her body relaxed with relief, and she slid into her seat. Her friend huffed again. “I wish every teacher would do that.” I wondered if I was being taken for a ride.
What is this constant battle doing to us? I thought, and not for the first time. I hate that dashes – something I’ve spent years teaching students to use well – now make me suspicious. I hate that excellent work now immediately has me turning to AI checkers even though they are wildly unreliable. I hate that I spend time doubting my students’ integrity. The constant suspicion is eroding something in me, eating away at some social contract that I’m not willing to give up.
Last year, my own child’s teacher told the class which AI checkers she was going to use and what percentage of “probably AI” was acceptable. (Was it 12%? 8? Is it weird that some percentage is ok? And that we don’t know what that percentage is?) My child largely did his own writing, although as I helped him, I realized that avoiding AI entirely for his generation is not unlike my generation avoiding plagiarism entirely: there are surprisingly complex layers to it. Still, he did the work: researched and wrote, wrote and researched. Then, he put his writing through the AI checker, and it invariably came back as more AI than was acceptable. His solution? He put his work through a “humanizer” AI until the AI detector showed that it was human.
I couldn’t decide if I should laugh or cry.
After school, I opened up the story that my student may or may not have written. The first paragraph really was excellent, but partway through, I started to see some pretty typical errors – little punctuation mistakes, wording that read like a 17-year-old rather than a computer. Still, just to be safe, I ran it through a couple of AI checkers. They were all over the place. Useless. I checked her version history and her early drafts. Everything I found suggested that this was her work, so I proceeded with that in mind.
Still, I wonder what we lose as we learn the steps to this new, complex dance. Time, for sure. My students (and my child) check their work in various checkers, humanize their work and then turn it in. Then we teachers check the work in various checkers, look at version histories or keystroke trackers, and grade it. Every minute we spend using these machines is time we could have spent writing or giving feedback or talking. We also lose a sense of trust – the students no longer trust themselves, and heaven knows teachers don’t trust them. I have accused students of using AI who maybe hadn’t, and I’ve definitely missed some who had. That, too, is a problem, because the bar for acceptable writing is changing. I recently found some of my old college papers, and I don’t want to shock you, but they were imperfect. Now students have spellcheck and grammar check, and enough kids are able to submit AI-enhanced work that it’s easy for teachers to expect higher levels of “correctness” than are, perhaps, reasonable. It’s easy to get used to the glib, polished prose that AI generates and to see that as the goal even when we know that it’s not.
And what of my student? Was I influenced by my early concerns as I graded her story? I hope not, but it can be hard to let go of that early whiff of “cheating.” I think I did right by her, but I know I took on some mental load to do that. It’s all so much.
I have more to say about this, but I’m still typing largely one-handed because of my stupid broken wrist – and I’m not using AI to make up for my injury, so it will have to wait for next week. For what it’s worth, her story wasn’t perfect, but it was interesting and emotional. She got an A.



