Future of Education · AI in Schools · Learning Visibility

The Screen Didn't Fail Us. We Failed the Screen.

What a Fortune investigation and a Harvard study reveal about the real problem with technology in classrooms

March 20, 2026 · 5 min read · Koan Team

This week, Fortune published a sweeping investigation with a headline designed to sting: "America's Math and Reading Scores Tanked After Schools Ditched Textbooks for Screens." The piece draws a direct line from 25 years of edtech adoption to a generation that neuroscientist Jared Cooney Horvath told Congress is "the first in modern history to be less cognitively capable than their parents." A Brookings report cited in the article warns that AI is accelerating a "great unwiring" of students' brains through cognitive atrophy and dependency.

It is a grim portrait. And it is not wrong.

But it is incomplete.

Two Studies, One Paradox

At nearly the same time, a Harvard randomized controlled trial published in Scientific Reports, a Nature Portfolio journal, found something that seems to contradict the Fortune narrative entirely: students using a custom-built AI tutor learned significantly more, in less time, than students in active-learning classrooms taught by experienced instructors. They also reported feeling more engaged and more motivated.

So which is it? Is technology making students dumber, or smarter?

The answer, as any good teacher will tell you, is that the question is wrong.

The Tool Is Never the Lesson

The Fortune investigation documents what happens when schools treat technology as the lesson plan itself. A district buys tablets. A state mandates a platform. Screens replace textbooks. But the pedagogy stays the same, or worse, disappears entirely. Students consume content passively. They click through modules. They generate polished outputs with chatbots. The appearance of learning replaces the real thing.

The Harvard study documents what happens when the opposite is true: when AI is built around sound pedagogical principles from the ground up. The tutoring system in the study was not a generic chatbot turned loose on a syllabus. It was designed to replicate the very practices that make great teaching great: scaffolded questioning, formative feedback, spaced retrieval. It guided students through the process of understanding rather than handing them the product of someone else's understanding.

The difference is not about screens versus textbooks. It is about whether the technology makes thinking visible or invisible.

The Invisibility Problem

Here is what both stories share, though neither quite names it: the crisis in education is not a technology crisis. It is a visibility crisis.

When a student submits a polished essay, the teacher sees a product. Was it written at 2 AM in a panic? Was it drafted and revised over a week of careful thought? Was it generated wholesale by an AI and lightly reworded? In most learning management systems, these three scenarios look identical. The submission arrives. The grade goes back. The process, where all the real learning happened, or didn't, remains invisible.

This is the problem that screens did not create but certainly amplified. When everything moves faster and every output can be machine-polished, the gap between appearing to learn and actually learning widens into a chasm.

What the Future Actually Needs

The schools that will thrive in the next decade will not be the ones that ban AI or the ones that adopt it uncritically. They will be the ones that use it to illuminate what was always hidden: the messy, nonlinear, deeply human process of figuring something out.

Imagine a classroom where a teacher can see not just that a student submitted a strong thesis, but that she revised it three times after a series of probing questions from an AI tutor. Where the teacher can see the five minutes a student paused to reconsider an assumption. Where the data does not just show a grade but shows a thinking trajectory, evidence that cognitive effort actually happened.

This is what we are building at Koan. Not another screen. Not another platform. A way to make learning visible.

Our AI tutor, Aidan, does not give students answers. It asks them questions, Socratic questions tailored to their rubric, their history, their patterns of thinking. And every revision, every pause, every shift in reasoning is captured. Not as surveillance, but as evidence. Evidence that a teacher can use to coach better. Evidence that a school can use to prove that learning is happening. Evidence that a student can look at and say, "I did that. I thought my way through it."

The Real Question

The Fortune investigation asks whether we should have given students screens. The Harvard study asks whether AI can teach. But the question that matters most is quieter than either:

Can we build technology that makes the invisible work of learning something a student, a teacher, and a school can finally see?

The answer, we believe, is yes. But only if we stop treating technology as either savior or villain and start treating it as what it has always been: a mirror. The question is what we point it at.

We can point it at outputs, at submissions and scores and polished final drafts. Or we can point it at process, at the thinking, the struggle, the revision, the growth.

The screen did not fail us. We just never asked it to show us the right thing.

Koan Learn — AI That Teaches Students to Think