The Tutor They Liked, Until They Knew
A new University of Cincinnati study found that students rated an AI chatbot's answers higher than their professor's, until they were asked to guess which response came from the machine. The reversal tells us where the work of trust really happens.
This month, researchers at the University of Cincinnati's College of Nursing published a small study with a finding worth sitting with. Seven doctoral students submitted statistics questions tied to their capstone projects. Each question came back with three blind responses. One was written by their professor, one by a graduate assistant, one by a tailored AI chatbot. The students rated each response for helpfulness and satisfaction without knowing the source.1
Then, separately, the researchers asked them to guess which response had come from the chatbot.
The students rated the chatbot's responses the highest. Yet when asked to identify the bot, they reliably pointed at the response they had liked least. Suspicion mapped onto dislike: the answer they trusted least was the one they pinned on the machine, and the one they preferred, they could not believe a machine had written.2
The lead author, biostatistician Joshua Lambert, calls the finding evidence that user trust has become one of the most important variables in AI research. The paper, titled "Blinded But Biased," appeared in the Journal of Nursing Education in February.2 What we believe about a tool, it turns out, changes what we accept from it, even when the tool itself has not changed.
The Reversal We Should Take Seriously
This is the quiet problem AI in education is going to spend the next decade working through. Students do not trust the bot, even when they prefer its work. Teachers do not trust the bot, even when it lightens their workload. Parents do not trust the bot, even when their children's scores rise. The numbers can be encouraging while the relationship stays sour. Between May and December 2025, the share of American students using AI for homework climbed from 48 percent to 62 percent. Over the same months, the share who said AI is harming students' critical thinking climbed from 54 percent to 67 percent.3 They use it. They worry about it. They do not endorse it.
The instinct of most edtech vendors is to respond to that skepticism with marketing. Better demos, clearer labels, more careful language. But the UC study suggests the issue is not labeling. It is deeper than that. It is what we cannot see.
What Trust Actually Is
When a student receives a thoughtful answer from a teacher, they have a sense of how that answer was produced. They know the teacher has expertise. They know the teacher has thought about them as a particular learner. The trust in the response is not really about the response. It is about the relationship that produced it.
The same response from a chatbot lacks every piece of that scaffolding. It comes from somewhere the student cannot see, generated by a process they do not understand, by an entity that knows nothing of them as a person. The substance of the answer may be identical. The trust is not.
Trust does not flow from outputs. It flows from visible processes. We trust the doctor whose reasoning we follow, the colleague whose decisions we have watched unfold over years. The fabric of trust is not what something produces. It is what we can see of how it got there.
Designing for Visibility
Most AI in education today gets this exactly backwards. The product is the output. The model is a black box. The student is shown the answer and told to either accept it or be skeptical, with very little material to work with in deciding which. No wonder they default to skepticism the moment the box gets named.
The OECD's 2026 Digital Education Outlook noted a pattern that points the other way. Students given general-purpose chatbots produced higher-quality work, but the advantage often vanished the instant the tool was taken away. Educational AI built with intentional pedagogical purpose, the kind that makes reasoning legible to both student and teacher, produced gains that lasted.4 The two cases look similar from a distance. Up close, they are entirely different.
A different design pattern is starting to emerge in classrooms willing to try it. Instead of hiding the AI's work, you reveal it. The student sees the chain of reasoning. The teacher sees the conversation history, the pause before a question, the moment the student pushed back and the tutor adjusted, the drafts that came before the final version. The artifact of learning is no longer a single deliverable. It is a record of process, in which both the student and the machine are visible.
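To make the pattern concrete, here is a minimal sketch of what a process-first data model might look like, written in TypeScript. Every name in it is hypothetical, one plausible way to structure a learning record so that the exchanges, the reasoning, and the drafts are first-class data rather than discarded intermediate state; it is not any product's actual schema.

```typescript
// Hypothetical sketch: a learning artifact modeled as a record of process,
// not a single deliverable. All names here are illustrative assumptions.

// One turn in the student-tutor exchange. Tutor turns can expose the
// chain of reasoning behind the reply, so it is inspectable later.
interface Exchange {
  timestamp: string;            // ISO 8601
  speaker: "student" | "tutor";
  message: string;
  reasoning?: string;           // visible reasoning on tutor turns
}

// A saved draft, so the final version carries its own history.
interface Draft {
  savedAt: string;
  text: string;
}

// The artifact a teacher reviews: the final text plus the process
// that produced it, kept in one record.
interface LearningRecord {
  studentId: string;
  assignmentId: string;
  exchanges: Exchange[];
  drafts: Draft[];
  finalText: string;
}

// A teacher-facing view: moments where the student pushed back and the
// tutor adjusted, approximated here as a student turn followed by a
// tutor turn that exposes fresh reasoning.
function pushbackMoments(record: LearningRecord): Exchange[] {
  return record.exchanges.filter((turn, i) => {
    const prev = record.exchanges[i - 1];
    return (
      turn.speaker === "tutor" &&
      turn.reasoning !== undefined &&
      prev?.speaker === "student"
    );
  });
}
```

The design choice that matters is that exchanges and drafts sit beside finalText in the same record: the process is not a log to be purged after submission but part of the artifact itself, which is what gives both the student and the teacher something to inspect when deciding how much to trust.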
This is the project we are working on at Koan. Aidan, our AI tutor, does not vanish into a black box. Every conversation lives alongside the student's writing. Every revision, pause, and breakthrough is preserved. The point is not surveillance. It is that trust requires visibility, and visibility requires a different design from the one most AI products were born with.
The UC researchers found a paradox. Students preferred the bot's work, until they knew it was the bot's. The way out is not to disguise the bot better. It is to give students enough visibility into how the bot works that the knowing no longer reverses the preference.
If we trust other people because we can see how they think, what would it take for our students to trust the machines we have placed in their classrooms?
References
1. UC nursing study: Students prefer chatbots in advising — until they know it's AI. University of Cincinnati News · April 2026
2. Blinded But Biased: Students Prefer Chatbot Until They Know It Is One. Journal of Nursing Education · February 2026
3. More Students Use AI for Homework, and More Believe It Harms Critical Thinking: Selected Findings from the American Youth Panel. RAND Corporation · March 2026
4. OECD Digital Education Outlook 2026. OECD · 2026