Thinking Freely with Nita Farahany

The Shield: Section 230 (Inside Advanced Topics in AI Law and Policy, Class #4.2)

Twenty-six words and 30 years

Nita Farahany
Feb 11, 2026

Twenty-six words. That’s how many it took to shield every social media platform in America from nearly every lawsuit for nearly thirty years. Twenty-six words written in 1996, before Google existed, before Facebook, before the smartphone, before the algorithm that learned what makes you anxious and fed you more of it.

Today we’re reading those twenty-six words. And we’re reading the case that found their limit.

8:30 AM, Wednesday. Welcome back to Professor Farahany’s Advanced Topics in AI Law and Policy class! We’re in Week 4. If you haven’t taken Monday’s class, where we covered the interview results and the evidence debate about social media harms, do that first. Today builds directly on that material.

And there’s still time to get fully caught up this semester! Start with Week 1, class 1 (From How AI Works to What AI Does) to gain a foundation for understanding autonomy and the impact of digital technologies on our well-being. Then dive into Class 1.2 on What AI Does to Your Thinking, and Class 1.3 on Protecting Autonomy in Law, Take 1. Week 2 shifts to the impact of digital technologies on attention, beginning with our attention audit in class 2 (Can you pay attention?), 2.2 (The attention evidence gap), and 2.3 (The laws that miss the point). And in Week 3 we looked at Dark Patterns, beginning with Class 3 (20 Clicks to Cancel), 3.2 (Why Dark Patterns Work), and 3.3 (How Law Addresses Dark Patterns).

Photo by Mariia Shalabaieva on Unsplash

On Monday, we saw an important gap between the harms that people are experiencing on social media and whether they think those harms are legally redressable. Today we’re going to discuss why that gap exists. And what’s starting to close it.

Which means we need to talk about Section 230.

The Law That Built the Internet (And Shielded Everything On It)

You’ve likely heard of Section 230 of the Communications Decency Act. It’s been called “the twenty-six words that created the Internet.” But most people get it wrong. Let’s get it right.

Here’s what it says:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In plain English: if someone posts something on your platform, you, the platform, can’t be treated as if you said it. The user is the publisher, not the platform.

[Poll]

Whatever you answered, I want you to understand why Congress enacted this. Because the origin story reveals something important about how we got here.

The Perverse Incentive

In 1995, an anonymous user posted defamatory statements about the securities firm Stratton Oakmont on Prodigy, one of the early online services. (Yes, that Stratton Oakmont, the one from The Wolf of Wall Street. Even their legal battles were dramatic.)

Prodigy had a policy of moderating content and it tried to keep things civil. The court looked at that moderation effort and said, because you exercise editorial control, you’re more like a newspaper publisher than a passive bulletin board. And newspapers are liable for what they publish.

Think about the incentive structure that creates. If trying to clean up your platform makes you more liable, what’s the rational response?


Don’t moderate at all. Let everything through. If you see child exploitation, hate speech, defamation, look the other way, because the moment you start removing some content, a court might hold you responsible for all of it.

Congress saw this perverse incentive and intervened. Section 230(c)(1) says: hosting others’ content doesn’t make you a publisher. And (c)(2), which is the “Good Samaritan” provision, adds that you don’t lose that protection by voluntarily moderating in good faith.

The combination was meant to create a virtuous cycle where platforms would host open discourse and have incentives to remove the worst content, without fear that moderation would create liability.

For almost thirty years, Section 230 has been the legal bedrock beneath every social media platform. Nearly every lawsuit claiming “the platform hurt me by hosting harmful content” has died on this hill.

And then three teenagers hit a tree.

Lemmon v. Snap: The First Crack

Here are the facts. I want you to decide how the court should rule before I tell you what happened.

Jason, age 17, Hunter, age 17, and Landen, age 20, were driving in Walworth County, Wisconsin. They opened Snapchat’s Speed Filter, a feature that overlaid their real-time speed on a photo. They reached 123 miles per hour, then crashed into a tree at 113 mph.

All three of them tragically died.

The parents sued Snap. Their theory was that Snap negligently designed the Speed Filter. The interplay between the filter and Snapchat’s broader reward system (streaks, trophies, social competition) incentivized users to drive at dangerous speeds to create impressive snaps.

Snap’s defense was Section 230. Whatever happened, it happened because a user chose to use the filter and post a snap. The content, the snap itself, was the user’s. Snap just provided the platform, so publisher immunity applies.

[Poll]

If you answered C, congratulations, you just identified the distinction that is reshaping technology law.

The Distinction

The Ninth Circuit held that Section 230 did not protect Snap in this case. And the reasoning here is crucial.

The court asked, what are the plaintiffs actually claiming? Are they suing Snap for something a user said or posted? Or are they suing Snap for how Snap built its product?

The answer was the second. The parents weren’t arguing that any particular snap was harmful. They weren’t asking Snap to take down a post. They were arguing that the Speed Filter itself, as a design feature, was defective.

The court’s key language was that “Their negligent design lawsuit treats Snap as a products manufacturer, accusing it of negligently designing a product with a defect. The duty that Snap allegedly violated springs from its distinct capacity as a product designer.”


Read that again. Products manufacturer. Not a publisher or speaker but a manufacturer.

Section 230 governs publishers. Products liability governs manufacturers. Snap was acting as both, and this claim targeted the manufacturer function.

Even the EFF, the Electronic Frontier Foundation, which normally defends the broadest possible reading of Section 230, endorsed this holding. Their position was that platforms can be sued for “defective tools, so long as plaintiffs’ claims do not blame them for the content that third parties generate with those tools.”

Which brings us to the emerging framework:

Content claims (what users post, how the platform moderates)?

  • Section 230 shields the platform

Design claims (how the product is built, independent of any particular post)?

  • Section 230 does not apply. The platform is a manufacturer, not a publisher.

Your Classification Exercise

Now I want you to do something. Remember Monday’s three categories of harm? Go back to them, and to the poll answer you held onto. Now let’s classify.

Take each harm your live counterparts’ interviewees described. Which side of the line does it fall on?

The algorithm locking onto distressing themes and relentlessly reinforcing them: Content claim or design claim?

  • Think about it. The content is what other users posted. But the mechanism—the algorithm that selected, amplified, and repeated distressing material—is a design feature. It’s the platform’s product. After Lemmon, this looks like a design claim. Section 230 shouldn’t block it.

WhatsApp’s “last seen” causing constant anxiety: Content or design?

  • This is the cleanest case. No user generated the “last seen” timestamp. It’s platform-created metadata. Pure design. Not a publishing decision in any sense.

Comparing yourself to others’ curated posts: Content or design?

  • This one’s harder. The posts themselves are user content. But the algorithmic curation, the fact that the platform selects comparison-triggering content and surfaces it in your feed, is a design choice. Plaintiffs would argue the design feature creates the context in which user content causes harm, much as the Speed Filter didn’t force anyone to speed but gamified speeding within a reward system designed to maximize engagement.

Platform failing to remove hate speech: Content or design?

  • This is a content moderation problem. The complaint is that the platform didn’t exercise its editorial judgment to remove harmful material. That’s an editorial decision. Section 230 was designed to protect exactly this.

[Poll]

The honest answer is probably B. The easy cases are easy. The Speed Filter is a product. “Last seen” is a product. Nobody disputes that. But the algorithm, the thing that arguably causes the most harm, sits right on the boundary. Is the recommendation algorithm a design feature (like a defective steering mechanism) or an editorial judgment (like a newspaper editor choosing the front page)?

That question is exactly what the First Amendment fight is about. And that’s where we’re going on Friday.

The Tension You Should Be Sitting With

Here’s what I want you to hold in your mind:

On Monday, your live counterparts’ interviewees said, in so many words: “It’s my fault. I chose to use these platforms. I consented.”

Lemmon says: when a product is designed in a way that makes it dangerous, the manufacturer can be liable even if the user chose to use it. Three teenagers chose to open the Speed Filter. They still died because of a design defect. Choice and design defect are not mutually exclusive.

One of the interviewees instinctively drew a line: he rejected a lawsuit for himself but said that “for children, addictive media consumption feels predatory because they are still developing.” Personal responsibility for adults. Product accountability for children.

But even that line raises questions. If the design is the same for adults and children (the same algorithmic amplification, the same infinite scroll, the same variable reward schedule), then either the design is defective for everyone or it isn’t. The user’s age changes the harm, but does it change the defect?

[Poll]

What’s Coming Friday

We have one more wall to get through.

Even if Section 230 doesn’t block design claims, even if Lemmon opens the door, platforms have another powerful defense. And it’s constitutional.

In Moody v. NetChoice (2024), the Supreme Court suggested that algorithmic curation might be editorial discretion, protected by the First Amendment. If the algorithm is speech, regulating it might violate the Constitution.

But Justice Barrett’s concurrence opened a crack, where algorithms that simply maximize engagement, without implementing any editorial vision, might not be speech at all.

On Friday, we’ll work through that constitutional constraint, look at what states are actually doing (Utah’s comprehensive law is fascinating), and connect it all to the trial that’s happening right now in Los Angeles. Mark Zuckerberg is expected to testify. Meta’s internal research, including Project Mercury, will be presented to a jury. The question before them: did these companies design a defective product?

The tobacco parallel isn’t hyperbole. The playbook is identical. But the constitutional terrain is much, much harder.

Class dismissed. See you on Friday.

The entire class lecture is above, but if you’d like to support my work or go deeper in your learning, please upgrade to a paid subscription.

Paid subscribers also get access to class reading packs, discussion questions, bonus content, full archives, virtual chat-based office hours, additional readings, as well as one live Zoom-based class session per semester.
