The adoption numbers are striking. In 2025, 85% of K-12 teachers and 86% of students reported using AI — figures that have climbed sharply year over year. Schools are integrating it into lesson planning, homework, tutoring, and college prep. The technology is already inside classrooms in ways that weren’t true 18 months ago. The research on what that actually does to learning has not kept pace.
That gap is now drawing serious concern from educators and researchers who say the classroom AI experiment is running ahead of any evidence that it’s working — and accumulating early signs that it may not be.
What Happens When You Take the AI Away
The most telling data point comes from Stanford economist Guilherme Lichand, whose 2026 study tracked what happened when students who had been using AI were told they could no longer access it. Those students subsequently performed worse than peers who had never used AI at all. Not worse than before — worse than students who had gone the whole time without it.
The finding points to a dependency effect: students who offloaded cognitive work to AI didn’t just fail to build skills — they may have actively eroded them. Learning, researchers note, is not optimized for efficiency. It requires effort, struggle, and productive friction — precisely the things AI is designed to eliminate.

Adoption Without Understanding
Despite near-universal use, the infrastructure around AI in schools remains thin. According to RAND Corporation, only 35% of school district leaders reported providing students with any AI training in 2025. Only 45% of principals said their school had any policies or guidance on AI use at all. The technology arrived in classrooms faster than anyone had time to think through the implications.
Teachers are feeling that gap in real time. A 2025 report from the Center for Democracy and Technology found that 71% of K-12 teachers say it is now hard to tell whether student work is actually the student’s own. Seventy percent say they worry AI is weakening critical thinking and research skills. And according to Stanford’s Human-Centered AI Institute, only 6% of K-12 teachers believe AI tools do more good than harm in education.
The Brookings Warning
The Brookings Institution’s 2026 report on AI and K-12 education concluded that the risks of generative AI in schools currently outweigh its benefits — a position that stands in contrast to the enthusiasm with which many districts have embraced it. The risks cited include weakened student-teacher relationships, data privacy vulnerabilities, and student safety concerns. Real-time data from school monitoring firm Securly found that roughly 1 in 50 student-AI interactions was flagged for indicators of potential self-harm or dangerous behavior.
The mental health dimension is not hypothetical. Recent cases have involved students who turned to AI chatbots for emotional support and went on to harm themselves. Schools that integrated AI tools for academic use didn’t necessarily anticipate those tools becoming de facto mental health resources.
The Case for Slowing Down
A growing segment of educators is pushing back — not against AI entirely, but against its unreflective adoption. The Conference on College Composition and Communication recently passed a resolution supporting teachers’ right to refuse AI in their classrooms, framing it as an academic freedom issue. More than 1,000 education professionals signed an open letter last year describing the push to integrate AI as “a massive marketing effort” unsupported by evidence of learning gains.
The core concern isn’t that AI has no place in education — it’s that schools are making that determination before the research exists to inform it. A 2025 Harvard study found that human tutors could read student emotional states with 92% accuracy; the most advanced AI tutoring systems managed 68%. The gap matters in a context where connection and engagement are primary drivers of whether students actually learn.
The experiment is already underway. What’s less clear is whether anyone will notice the results before they become irreversible.