When the AI assistant Claude went down last month, something unexpected happened in software development circles: engineers couldn’t code. Tasks they had been completing fluently for months suddenly felt difficult — not because the tasks had changed, but because the tools that had been doing much of the underlying cognitive work had disappeared. One developer, describing the experience online, said he’d have to “write code like a caveman.”

He was joking. But the researchers studying this phenomenon are not.

The Cognitive Debt Problem

The term researchers are using is “deskilling” — the gradual erosion of foundational skills that occurs when AI handles the cognitively demanding parts of a job so reliably that people stop practicing them. It’s not a hypothetical risk. It’s a documented pattern with a name, and it’s accelerating.

Rebecca Hinds, head of the Work AI Institute at Glean, describes two possible outcomes depending on how people use AI tools. Used intentionally — particularly in areas where a worker already has deep expertise — AI creates a “cognitive dividend,” handling routine tasks and freeing up time for higher-order thinking. Used reflexively as a shortcut, it creates “cognitive debt”: people become faster and more productive in the short term while quietly losing the underlying skills that made them effective in the first place.

The distinction lies in whether AI supports thinking or replaces it. And for a growing number of workers, the answer is the latter.

The Risk Is Sharpest for Early-Career Workers

The deskilling risk is most acute for people just entering the workforce. Junior roles have traditionally functioned as training grounds — the place where new workers learn to break down messy problems, debug what’s broken, and defend their reasoning when challenged. Those experiences build the intuition and judgment that experienced professionals rely on. When AI handles the same tasks that used to build those skills, early-career workers may develop fluency — the ability to produce polished outputs — without ever developing the underlying competence.

Hinds notes that AI can create the illusion of expertise. It’s becoming harder to tell, she says, where the worker’s knowledge ends and the technology begins. That ambiguity isn’t just a management problem. For workers themselves, the gap may not become visible until the tool disappears — and suddenly the task that felt routine is no longer possible without assistance.

The developer who struggled when Claude went down wasn’t failing at something new. He was discovering a dependency that had formed without him noticing.

It’s Not Just Tech Workers

The pattern extends well beyond software development. Anthropic’s own economic research published earlier this year found that AI tends to handle the higher-skill components of many jobs — the tasks requiring more education and judgment — leaving behind the more routine elements. Travel agents who use AI for complex itinerary planning may find the skill for that work atrophying. Technical writers who outsource research and synthesis may gradually lose the analytical instincts those tasks once built.

The same dynamic plays out anywhere AI is used as a default rather than a deliberate tool: legal research, financial analysis, marketing strategy, medical documentation. The outputs look the same. The cognitive infrastructure that produced them quietly decays.

The Intentional Use Framework

The researchers are not arguing against AI use — they’re arguing for deliberate AI use. The cognitive dividend is real, and the productivity gains are documented. The question is whether workers are treating AI as a thought partner or a thought replacement.

Some organizations, recognizing that their AI enthusiasm may have gone too far, are now creating structured space for employees to practice problem-solving without assistance — not as a Luddite gesture, but as skill maintenance. The analogy is a surgeon who uses robotic assistance for most procedures but still practices foundational techniques to maintain the manual competence the robot depends on.

The most practical framing comes from the researchers themselves: treat AI like an assistant for administrative and routine tasks, freeing up time and attention for the complex judgment calls that require genuine human expertise. Reserve the hard thinking for yourself. Use AI to get through more of the easy work, not to do less of the hard work.

That distinction — easy vs. hard, not fast vs. slow — is where the cognitive dividend lives.
