NYC Wrote the Rules. But They Missed the Question.
New York City's new AI guidelines for schools focus on what teachers should allow. The deeper issue is what technology should reveal.
On Monday, the New York City Department of Education released its preliminary AI guidance for the city's 1,800 public schools. The document arrives nearly three years after the city's brief and widely criticized ban on ChatGPT, and it reads like the work of a city that has learned at least one lesson: pretending AI does not exist is not a strategy.
The guidance is careful, measured, and incomplete in exactly the way you would expect from a system responsible for 1.1 million students. It affirms that AI is not a replacement for teachers. It calls for public feedback through May 8. It gestures toward a more comprehensive playbook expected by June. It is, in many ways, a reasonable first step.
But it leaves the most important question unasked.
What the Guidelines Get Right
Credit where it is due: New York is doing what most districts have not. It is writing policy. In a landscape where 86% of education organizations report using generative AI yet most lack formal guidance for it, the act of putting rules on paper matters. Schools were already making ad hoc decisions in every classroom, every day. As Manhattan Assistant Principal Joe Vincente told Chalkbeat, his school was holding multiple meetings a week with students suspected of using AI inappropriately, and it had no city framework to guide those conversations.
The guidelines also acknowledge a truth that too many policies dodge: AI is already embedded in the tools students use. It is not a separate category anymore. It is in the word processor, the search engine, the homework helper on every phone. Policy that pretends AI is something you can cordon off into a single app or platform is already obsolete before it is published.
What the Guidelines Miss
The document focuses, understandably, on guardrails. Privacy protections. Age-appropriate use. The role of teachers as decision-makers. These are necessary. They are also insufficient.
Here is what is missing: any framework for determining whether learning is actually happening.
This is not a small gap. It is the gap. When a student uses AI to help write an essay, the current paradigm gives schools exactly two options: trust that the student learned something, or try to detect whether the student cheated. Neither option answers the question that matters. Did the student think? Did they struggle productively? Did they revise their ideas in response to friction? Did they grow?
The NYC guidelines, like most AI policies emerging across the country, operate within a framework designed for a world where the final submission was a reasonable proxy for the learning process. That world is gone. A polished essay can now be produced in thirty seconds by anyone with a browser. The product tells you almost nothing about the process. And the process is where the learning lives.
The Visibility Gap
This is not a criticism specific to New York. It is a structural problem in how we think about education technology. The OECD's Digital Education Outlook 2026 documented what happens when AI does the thinking for students: a 48% performance boost that collapses into a 17% deficit the moment AI is removed. The researchers call it "metacognitive laziness." Students perform better with the tool and worse without it. The tool carried them. It did not teach them.
But the same research found that purpose-built Socratic AI tools produced sustained gains. Not temporary boosts. Real, persistent improvements in critical thinking. The difference is not in the power of the model. It is in the design of the interaction. Tools that ask questions build thinking. Tools that give answers replace it.
This is where policy needs to go next: not just "Is AI allowed?" but "Is the AI making thinking visible or invisible?"
What Would Better Look Like
Imagine a version of the NYC guidelines that included this principle: any AI tool used in schools should make the student's thinking process more visible, not less. That single criterion would change everything. It would rule out general-purpose chatbots used as essay mills. It would favor tools built on pedagogical foundations. It would give teachers something they desperately need: evidence of what happened between the assignment prompt and the final submission.
This is the work we pursue at Koan. Our AI tutor, Aidan, does not generate content for students. It asks them questions. Socratic questions, calibrated to the rubric, adapted to the student's patterns. And every revision, every pause, every shift in reasoning is captured in the WorkHub. Not as surveillance, but as a timeline of thinking. A teacher reviewing a student's work can see not just what they submitted, but how they arrived there. The three drafts. The moment they reconsidered a weak argument. The five-minute pause before a breakthrough.
This is not a feature. It is a philosophy. Learning that cannot be seen cannot be supported, cannot be assessed, and cannot be trusted.
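For readers who think in data structures, here is a minimal sketch of what a "timeline of thinking" could look like as an event log. Everything in it, the `ProcessEvent` type, its fields, and the `summarize_process` helper, is a hypothetical illustration, not Koan's actual WorkHub schema.

```python
# Illustrative sketch only: a hypothetical schema for a "timeline of
# thinking," not Koan's actual WorkHub data model.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class ProcessEvent:
    """One observable step in a student's work on an assignment."""
    at: datetime
    kind: str        # e.g. "draft", "revision", "pause", "tutor_question"
    note: str = ""   # short human-readable description


def summarize_process(events: list[ProcessEvent]) -> dict:
    """Reduce a timeline to the signals a teacher might scan first."""
    drafts = sum(1 for e in events if e.kind == "draft")
    revisions = sum(1 for e in events if e.kind == "revision")
    # Longest gap between consecutive events: a rough proxy for a pause
    # (a real system would distinguish idle time from time away).
    longest_gap = timedelta(0)
    prev = None
    for e in sorted(events, key=lambda e: e.at):
        if prev is not None:
            longest_gap = max(longest_gap, e.at - prev.at)
        prev = e
    return {
        "drafts": drafts,
        "revisions": revisions,
        "longest_gap_minutes": round(longest_gap.total_seconds() / 60, 1),
    }


timeline = [
    ProcessEvent(datetime(2026, 4, 1, 19, 0), "draft", "first full draft"),
    ProcessEvent(datetime(2026, 4, 1, 19, 12), "tutor_question",
                 "asked to justify the second claim"),
    ProcessEvent(datetime(2026, 4, 1, 19, 17), "revision",
                 "reworked a weak argument"),
]
print(summarize_process(timeline))
# {'drafts': 1, 'revisions': 1, 'longest_gap_minutes': 12.0}
```

The design choice the sketch encodes is the point of the argument: the unit of record is the step, not the submission.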
A Question for the Playbook
The NYC Department of Education has promised a more comprehensive AI playbook by June. That is a genuine opportunity. The city has scale, influence, and the attention of every school district in the country. What New York decides will ripple outward.
So here is the question worth adding to the public comment period, the one that could shift the conversation from rules about AI to principles for learning:
If we cannot see the process by which a student arrived at their work, how can any policy, no matter how carefully written, tell us whether they learned?