Inside My AI Law & Policy Class (Class #1)
Every Monday & Wednesday this fall, I’m opening my classroom doors. Consider yourself enrolled.
Welcome to Duke Law (You + My Actual Students)
Starting today, you’re enrolled to get real-time dispatches from my AI Law & Policy course at Duke Law School. In addition to my regular posts about digital technologies and your brain, every Monday and Wednesday I’ll share what just happened in class. The revelations, the confusion, the “aha” moments, and yes, actual legal education.
Think of it as auditing Duke Law. For the price of a cup of coffee.
Today’s Opening Discussion: “What Is AI?”
I gave my students 90 seconds to write down their definition of artificial intelligence. Try it yourself right now. Seriously, grab a pen or type it out on your computer.
Done?
When they compared definitions, there was interesting convergence and divergence. “Software that learns.” “Machines that think.” “Advanced pattern recognition.” No two matched.
So my question to you and to them is, how do we write laws for something we can’t even define?
The Understanding Wars
The readings I assigned also included two completely contradictory views:
Geoffrey Hinton (the “Godfather of AI”): “They definitely understand. They are intelligent.”
The ‘Stochastic Parrot’ authors (including the fabulous Emily Bender): “They’re just predicting the next most likely word based on statistics.”
And just last week, Mustafa Suleyman posted about “seemingly conscious AI,” arguing that we should be building AI for humans, not to seem like humans.
And then we discussed why this might matter. If Hinton’s right, we might need to consider AI rights, AI testimony, AI liability. If the critics are right, it’s just software, a tool like any other, where humans bear full responsibility for its use. And Mustafa Suleyman would then be right, that we should be designing those tools to serve us, not to fool us.
The Steering Wheel Problem
Helen Toner (yes, from the OpenAI board drama) offers a fascinating metaphor in her TED talk, arguing that governing AI is like “driving down a road with unexpected twists and turns.”
Her point is that we don’t need perfect foresight to govern AI. We need a clear windshield (transparency into what AI systems are doing), good steering (adaptive governance that can respond quickly), and working brakes (kill switches and safety measures).
But imagine driving when the road keeps morphing before your eyes (AI capabilities are changing monthly), your windshield is fogged over (we have a “black box” problem), and different passengers claim you’re in a different vehicle, or not in a vehicle at all (the definitional problem).
That’s AI governance in 2025.
Your Mini Law School Lesson: Key Concepts
Since you’re now effectively a Duke Law student, here are a few definitional things you should know after today:
GANs (Generative Adversarial Networks): Think of two AI systems fighting each other, with one creating fake content and the other trying to detect whether it’s fake or real. They train against each other, iterating until the fakes are indistinguishable from the real thing. And as you might guess, this introduces all sorts of legal nightmares, as courts increasingly encounter AI-generated evidence that, if the model succeeds, should be indistinguishable from real evidence.
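If you’d like to see that adversarial loop rather than just imagine it, here’s a minimal toy sketch (assuming the PyTorch library; the “real” data here is just a made-up distribution of numbers, not images or courtroom evidence): a generator learns to produce fakes that a discriminator can no longer reliably catch.

```python
# Toy GAN sketch: generator vs. discriminator on 1-D synthetic data.
# Purely illustrative; not drawn from any real deepfake or evidence system.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: samples clustered around 4.0
    return torch.randn(n, 1) * 0.5 + 4.0

gen = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Discriminator learns to label real data 1 and generated fakes 0
    real = real_batch()
    fake = gen(torch.randn(64, 1)).detach()
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
             loss_fn(disc(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator learns to make fakes the discriminator labels "real"
    fake = gen(torch.randn(64, 1))
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real data's 4.0
print(gen(torch.randn(5, 1)).detach().squeeze())
```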
The “Black Box” Problem: AI makes decisions, but no one, not even its creators, can fully explain why and how it made them. This is the “black box”: you feed in training data, a lot happens under the hood, and all we see is the output. Now imagine a judge relying on an AI system to help determine a criminal offender’s sentence, while being unable to explain how or why the system arrived at the chosen sentence or parole decision. Is this a violation of due process? Probably.
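To make the worry concrete, here’s a small hypothetical sketch (assuming the scikit-learn library, with entirely synthetic data and invented features): a model trained on past cases hands back a risk score, but nothing inside it resembles a chain of legal reasoning.

```python
# Hypothetical "black box" risk model: the score is easy to get,
# the reasoning behind it is not. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 12))                    # 12 opaque, made-up features
y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)  # synthetic past outcomes

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

defendant = rng.random((1, 12))                    # one new (synthetic) case
score = model.predict_proba(defendant)[0, 1]
print(f"Predicted risk score: {score:.2f}")
# The score exists; a human-readable "here is my reasoning" does not.
# The internal splits of 200 decision trees are not an explanation a judge can recite.
```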
Natural Language Processing (NLP): This is how AI “understands” human language, by breaking it into tokens and patterns. The amazing thing is that your next contract (or your last one) might be, or might have been, reviewed by NLP. Which has to make you wonder (as the now-law-student that you are): when the AI system overlooks a crucial clause in its review, who’s liable? (The lawyer, for now.)
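And here’s a deliberately crude illustration of that tokenization step (plain Python, with a made-up clause and a made-up review rule; real contract-review tools use learned subword vocabularies and far more sophisticated models):

```python
# Toy tokenization of a (fictional) contract clause, plus a naive review rule.
import re

clause = ("The Indemnifying Party shall hold harmless the Indemnified Party "
          "from any and all claims, except as provided in Section 9.2.")

# Crude word-level tokenization; real NLP systems learn subword vocabularies.
tokens = re.findall(r"[A-Za-z]+|\d+\.\d+|[.,]", clause)
print(tokens[:12])

# A naive "review" rule: flag clauses that carve out an exception.
if any(t.lower() == "except" for t in tokens):
    print("Flag: clause contains an exception -- human review recommended.")
```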
What My Students Are Grappling With
We then debated where specific technologies fall on a spectrum from “Definitely Not AI” through “Maybe AI” to “Definitely AI.”
One example split the in-person students down the middle.
Another struck them as obviously AI, but what makes it so obvious?
Another they judged not AI today, though it would have been in 1960. Why is it no longer AI? Isn’t it still input, something hidden from us, output?
One they simply guessed was AI.
And for one, students thought only the AI-generated summary at the top counted as AI, not the search itself.
What would have counted as AI yesterday is now just mundane software (like search engines). Given our changing conceptions of AI, how do we write laws that won’t be obsolete by the time you all (yes, I mean you, too) graduate?
Your Homework (Yes, I mean you, virtual student)
Write a one-paragraph legal definition of AI that:
Won’t become obsolete as technology advances
Addresses whether “understanding” matters legally
Clearly identifies what we will and won’t regulate as “AI”
For Full Students [Paid Subscribers]
[Readings, video assignments, and virtual chat-based office-hours details below]