The Hidden Brain Damage of AI
A new study shows how physicians using AI systems are suffering a measurable decay in skills. Are the solutions hiding in plain sight?
After just three months of AI assistance, experienced endoscopists, physicians who had spent years honing their ability to spot potentially cancerous polyps, became significantly worse at their jobs. The study, published in The Lancet Gastroenterology & Hepatology, found that this deskilling persisted even after the researchers controlled for every variable they could think of.
This study is a preview of the mass cognitive fragmentation we are all experiencing as we offload more and more of our thinking to AI, threatening the very foundation of human cognition and the core capacities we need to flourish. All the while, we keep accumulating evidence of how to avoid this harm, and still choose otherwise.
Today, physicians are held legally responsible for their clinical decisions, even when they use or rely on AI. As I explore in this Substack post, existing legal frameworks assume that physicians retain the skill and judgment to override AI recommendations, even when they may no longer be able to do so. Some scholars describe this as making physicians “liability sinks” that absorb legal responsibility for AI decision-making. (The concept of “moral crumple zones” from robotics applies well here: humans absorb blame when the system fails, even as the system strips them of the capacity to prevent that failure.) When AI systems that outperform physicians become the standard of care—and they will—physicians will be legally required to use them. But using them causes skill decay, which increases dependence, which causes more decay. It’s a vicious cycle that ends with physicians as little more than legal shields for AI systems, bearing responsibility without capability.
The first malpractice case in which a physician’s AI-induced skill atrophy contributes to patient harm will unleash a tsunami of litigation. Unlike past malpractice cases, where physicians could point to their training and experience, these cases will expose an uncomfortable truth: the tools meant to enhance our capabilities are systematically undermining them.
Your Brain Is Physically Shrinking
When we stop using cognitive skills, the brain regions responsible for those skills shrink. Neuroscientists have shown that brain regions expand with learning and practice, but that without continued use they atrophy.
Studies of muscle disuse show measurable decreases in brain volume when functions go unused. The same principle applies to cognitive functions: without regular challenge, the hippocampus shrinks, the prefrontal cortex atrophies, and the neural pathways that support complex reasoning degrade. The endoscopists in the Lancet study are most likely already showing changes in the visual pattern-recognition areas of their brains compared to three months ago.
A meta-analysis of AI-induced deskilling in medicine makes clear that this is a systemic crisis. The researchers identify multiple forms of skill erosion already occurring: diagnostic deskilling, as physicians lose the ability to form differential diagnoses without AI assistance, and even the loss of basic clinical skills, as AI-driven tests replace hands-on assessment.
Perhaps more alarming is what researchers have called epistemic sclerosis, the ossification of medical knowledge itself. As AI systems reinforce existing diagnostic patterns, they risk freezing our knowledge in time. To innovate, we must be able to challenge prevailing wisdom, to see what doesn’t fit past patterns. Physicians can’t challenge what they can no longer understand, putting future progress in medicine and human health at risk.
Cognitive degradation from relying on AI applies to all of us, not just physicians. A recent study with 666 participants found a “significant negative correlation” between frequent AI tool use and critical thinking abilities. Perhaps unsurprisingly, younger users, who have spent less of their lives learning without AI than adult learners, showed the most severe impacts. The cognitive crisis for humanity has already begun.
The Solution We’re Ignoring
The good news is that we are beginning to see how we can prevent this cognitive degradation. A pre-print study posted on arXiv last week showed how designing AI to challenge the user to think through an issue can improve human decision-making and critical thinking skills. When study participants interacted with AI that presented opposing viewpoints (what the researchers called “stance-balanced” AI), they showed better objective performance and lower cognitive bias than those using conventional AI chatbots that didn’t challenge them.
The idea is a simple and elegant one, and one that applies across societal domains. Disagreement forces more thoughtful engagement. When AI challenges our assumptions rather than confirming them (just like when we are exposed to a diversity of human viewpoints), we engage the very neural circuits that passive AI allows to atrophy.
This aligns well with what learning scientists call “desirable difficulties”—the cognitive challenges that promote deep learning and robust skill development. Current AI design reduces these difficulties, essentially making us cognitively obese from our lack of mental exercise.
But instead of implementing solutions like these, we’re racing to deploy AI systems that maximize efficiency at the cost of human capability. We’re choosing cognitive convenience over cognitive preservation.
Our cognitive capacities are fundamental to our mental self-determination. When an AI system degrades our cognitive abilities, it violates our cognitive liberty.
If a chemical company released a product that caused gradual brain damage, we would be outraged, file class-action lawsuits, and demand regulatory oversight, especially if children were at even greater risk than adults. And yet, even as the studies stack up showing that companies developing AI systems are causing cognitive fragmentation in human intelligence, we stand idly by, using their products and participating in the mass deskilling of humanity.
Perhaps a starting place would be to put warning labels on AI systems, so that we enter cognitive diminishment with open eyes.