This morning, I saw my children off to their first day of kindergarten and fifth grade. On Monday, I start a new semester of teaching AI Law and Policy and Biometrics, Identity, and Policy at Duke Law School. With education on my mind, I stumbled across a new pre-print posted yesterday on arXiv, which left me wondering what lies ahead for learning. Is it a classroom where AI reads students’ brainwaves, detects when they are confused, and adjusts its lessons in real time?
Let me back up. I’ve been following with interest the spread of Alpha School, backed and championed by Bill Ackman. It’s a radical experiment in education where children spend just two hours daily on core subjects, taught by AI that adapts to their individual pace of learning. Your child is an advanced first grader? No problem, the AI will adjust to teach them third-grade material. With new campuses opening this fall (including here in North Carolina), color me intrigued. But as a professor, I’m left wondering whether AI will replace traditional teaching, and whether teachers will really be relegated to student “guides” while AI instructs the next generation.
I take comfort in the fact that this isn’t our first encounter with “revolutionary” educational technology. In the 1960s, B.F. Skinner promoted his “teaching machines” to personalize learning through programmed instruction. In the early 1980s, computers were going to revolutionize classrooms. The 2010s brought us MOOCs that would democratize education. Each wave promised transformation, and each came up short. Is the Alpha School approach any different? Maybe. This time it’s not just about how we deliver information. It’s about what is delivering it.
Cue the pre-print I stumbled across yesterday, coming out of MIT Media Lab, UC Berkeley, and Princeton. In my present frame of mind, I quickly started thinking about how their findings might find their way into Alpha School campuses (and other schools, workplaces, and our everyday lives). In the study, Detecting Reading-Induced Confusion Using EEG and Eye Tracking, the researchers used EEG and eye-tracking with AI processing to detect when a research participant was confused while reading. Their multi-modal model achieved about 77% test accuracy across participants at distinguishing confused from not-confused reading, with a best case of about 89.6%.
They had 11 adult participants read 300 paragraphs, some deliberately confusing and others requiring specialized knowledge. Using 64-channel lab-based EEG and wearable eye-tracking glasses, they captured the N400 brain signal, a telltale neural response roughly 400 milliseconds after the brain encounters something incongruent, together with eye-tracking data on gaze patterns. They then built AI models that could interpret those signals to tell when a participant was lost while reading.
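To make the multi-modal idea concrete, here is a toy sketch of the kind of late-fusion classifier the study describes: one feature from EEG (amplitude in the N400 window) and one from eye tracking (regressive eye movements), combined into a single confusion score. Everything here is illustrative and simulated, not the researchers’ actual features, weights, or data.

```python
# Toy sketch of multimodal confusion detection (illustrative only).
# Assumption: confused reading shows a larger (more negative) N400
# deflection and more regressive eye movements than fluent reading.
import random

random.seed(0)

def simulate_trial(confused: bool):
    """Generate toy per-paragraph features for one reading trial."""
    n400 = random.gauss(-4.0 if confused else -1.0, 1.0)        # microvolts
    regressions = random.gauss(3.0 if confused else 1.0, 0.8)   # count
    return n400, regressions

def fused_score(n400, regressions):
    """Late fusion: average normalized evidence from each modality."""
    eeg_evidence = -n400 / 4.0          # more negative N400 -> higher score
    gaze_evidence = regressions / 3.0   # more regressions   -> higher score
    return 0.5 * eeg_evidence + 0.5 * gaze_evidence

# Simulate 100 confused and 100 not-confused trials, classify by threshold.
trials = [(simulate_trial(c), c) for c in (True, False) for _ in range(100)]
preds = [fused_score(*feats) > 0.75 for feats, _ in trials]
accuracy = sum(p == c for p, (_, c) in zip(preds, trials)) / len(trials)
print(f"toy fused accuracy: {accuracy:.2f}")
```

On this synthetic data the fused score separates the two classes well; the real study’s harder problem is that genuine EEG and gaze signals are noisy and vary from person to person, which is why its cross-participant accuracy sits near 77% rather than near perfect.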
While this was an early feasibility study, not a real-time deployment of the technology, it’s easy to imagine a future where developments from this study and experiments like the Alpha School converge. Alpha’s AI “teachers” could be coupled with children using wearable devices that monitor their real-time confusion, all with the AI systems adapting teaching and content to address that confusion.
Think of a second grader learning about photosynthesis. Her EEG and eye-tracking data reveal confusion at the section on “chloroplasts.” In real time, her screen changes to introduce a 3D animation showing sunlight entering a leaf and then zooming into the cells to make the abstract concept more concrete for her. Meanwhile, the student next to her gets an animation of “Clara Chloroplast” running a solar panel factory, because his wearable data showed no confusion when the section on chloroplasts was delivered.
Could this revolutionize education for students with ADHD, dyslexia, or autism? Could a system recognize when a dyslexic student’s brain is working overtime, and then automatically adjust the font, spacing, or perhaps switch to audio to keep them apace in their learning? Could we design systems that detect the pre-meltdown stress patterns in a child with autism, and implement changes to the environment that would help modulate their stress?
Of course, anyone who’s watched a kindergartener try to keep a mask on for eight hours (my now-fifth grader wore a mask in school for the entirety of kindergarten), knows that “wearable EEG for children” might be unrealistically optimistic. Can we really expect five-year-olds to keep sensors properly positioned? What happens when little Tommy discovers he can make his confusion levels spike by thinking about dinosaurs instead of fractions? And will this create a new digital divide where wealthy students get brain-optimized learning while others get yesterday’s worksheets?
The Price of Progress
Sooner rather than later, we will have to address the convergence of AI and wearable devices in classrooms, and ensure they bring a future of enhancement rather than creepy surveillance. The rose-colored-glasses view of the future is one of democratized education, where every student gets precisely what they need, when they need it, and learning differences are seamlessly accommodated.
The dystopian future is one where children’s thinking patterns become data streams for schools and corporations. Where every moment of confusion, sign of interest, or pattern of struggling with content is mapped, stored, and analyzed by AI systems. Where future employers know your cognitive weaknesses before you walk in the door. Where insurance companies price premiums based on your learning struggles and cognitive development beginning at age seven. Where mental privacy is normalized away in the name of learning efficiency. And, perhaps most troubling for the future of human thinking, where AI resolves every moment of confusion instantly, such that the productive struggle that builds critical thinking disappears.
I’m guessing that my now-kindergarten-aged daughter will graduate from high school into a world where “reading someone’s mind” isn’t a metaphor but a common methodology. Which makes me wonder whether we are doing enough to prepare her for that world.
As I gear up for teaching on Monday, I’m struck by the irony that I will be teaching future lawyers to grapple with questions like these, using traditional Socratic dialogue, classroom debates, and through productive struggle. Maybe that’s telling. Or maybe I’m just another educator about to be made obsolete, clutching my dry-erase markers while the future streams past me in neural signals.
First, very much enjoying your work. So, an observation: I think we all learn differently, and maybe what may look like confusion is simply a reassembly of our previous framing of our understanding of a concept. You can literally see people “looking inward” to try to understand things; that’s not necessarily confusion. Finally, we put tremendous resources into getting computers to be more human, and here we are, I fear, with computers attempting to get us to be more like them in the way we learn. This may be just a step toward Ray Kurzweil’s “Singularity.”
Progress is an illusion with technology. Old vacuum-tube radios sounded better, but the tubes broke easily. Commercial AI is control and domination presented as help. Anything that seems easy, anything that is pushed on you, and anything presented as a panacea is most assuredly a path to subservience and slavery.