
What Cleared Her Was the Process

A fifteen-year-old in North Carolina was given a zero for suspected AI use after three detectors all agreed. Her Google Doc revision history disagreed. The case shows where proof of learning actually lives.

April 30, 2026 · 5 min read · Koan Team

Eleanor Canina is fifteen, a freshman at Green Hope High School in Cary, North Carolina. Earlier this month she handed in a writing assignment on the first act of Romeo and Juliet. It came back with a 0 and a note from her English teacher: "evidence of AI, Please redo." The teacher had run the work through three different AI detectors. The results came back at 62 percent, 75 percent, and 87 percent likelihood of AI generation.[1]

Eleanor had not used AI. She is a heavy reader with the vocabulary of a heavy reader. Her mother, a sociology professor at NC State, suspects that one phrase may have helped tip the detectors: "the titular character."[2]

What cleared Eleanor's name was not a fourth detector returning a more flattering number. It was her Google Doc revision history. Her mother asked the school to look at the writing process itself, the long string of saved keystrokes that show a person thinking. A second teacher pulled up the revisions, watched the document grow word by word, and gave the original assignment a 100.[1]

The Detection Arms Race Is the Wrong Race

AI detection is an industry built on guessing, and the guesses are not good. In one widely cited study, detectors flagged about 1 to 2 percent of writing by U.S. students as AI generated, while flagging 61 percent of TOEFL essays the same way.[3] Non-native English speakers, neurodivergent writers, and students with unconventional writing styles get caught in the same nets. The guess is bad, and the guess is biased.

North Carolina's own Department of Public Instruction has been telling districts this since 2024. The state's AI guidebook tells educators to "use great caution with AI detectors" because they "have proven not to be dependable" and "should never be used as the only factor when determining if a student cheated."[4] After Eleanor's case became public, the Wake County district reiterated that it does not provide or require detection tools. Instead, the district said, teachers should "rely on multiple measures, such as reviewing a student's writing process and work history."[1]

That sentence is the whole next decade of education, hidden in one line of district communication. The proof of learning is the process. Not the artifact. The process.

The Receipts We Already Have

For thirty years, the digital tools students use have been quietly recording how they work. Google Docs holds a revision history of every keystroke. Word tracks edits. Even simple text editors save versions. The forensic record exists. The trouble is that almost no school has built the workflow to see it.

Most teachers have hundreds of papers to grade. They cannot scrub through keystroke timelines on every assignment. So when they suspect something is off, they reach for the only tool the market has handed them, a detector that returns a percentage and a vibe. The percentage looks impressive. The vibe is sometimes wrong. A real student gets a zero and has to fight, with the help of a parent who happens to be a professor, to be believed.

Imagine a different design. Imagine that the writing process were as visible to a teacher as the final paper, presented not as a forensic dump but as a readable narrative. Every revision, every long pause before a new paragraph, every moment the student deleted a sentence and tried again. Imagine that AI use, when it happened, was visible in the same record, with the same fidelity. The question would no longer be "did she or didn't she?" It would be "what did she actually do?"

What Visibility Changes

This is the design we are pursuing at Koan. Aidan, our AI tutor, lives next to the student's writing instead of inside a black box. Every conversation, every revision, every pause and breakthrough is captured, not as surveillance, but as the evidence base of learning. A teacher who has seen the process does not need a detector to tell her whether her student understands Romeo and Juliet. She has watched the student work through it.

The deeper shift is in what counts as proof. For most of the modern era, the artifact of learning has been the final draft. The essay, the test, the polished output. The artifact was always a performance, a moment cut from a longer story. AI did not invent the problem. It just made the limits of the artifact undeniable. When the final draft is no longer a reliable signal of who did what, you finally have to ask what the signal actually is. The signal, it turns out, has always been the process.

Eleanor's case ended well. She got her 100, and she is now circulating a petition asking her school to stop relying on detectors that produce false positives.[1] But most students who get flagged do not have a parent who teaches research methods at a Tier I university. Most cases never get a second teacher. They end with a quiet zero and a quiet doubt that trails the student for years.

If the proof of learning has always lived in the process, what would it mean to finally make the process the thing we look at?

References

  1. "Wake student vindicated after she says she was falsely accused of using AI," The News & Observer (via Yahoo News), April 2026.
  2. "North Carolina Student Fights Accusation of AI Use," Government Technology, April 2026.
  3. "AI Detection Tools Falsely Accuse International Students of Cheating," The Markup, August 2023.
  4. "NCDPI releases guidance on the use of artificial intelligence in schools," NC Department of Public Instruction, January 2024.

Sources cited in order of appearance.
