Thinking Freely with Nita Farahany

Everyone Described Harm (Inside my Advanced Topics in AI Law and Policy Class #4)

Nobody Wanted to Sue

Nita Farahany
Feb 09, 2026

8:30 AM, Monday. Welcome back to Professor Farahany’s Advanced Topics in AI Law and Policy Class.

This week, we’re talking about social media harms. And the timing could not be better. Right now, literally as I write this, a jury is being seated in a Los Angeles courtroom for what may be the most consequential technology trial since Big Tobacco. We’ll get into the details on Friday. But first, we need to understand why that trial is happening, what evidence supports the claims, and why none of the people interviewed by your live counterparts in our Duke Law classroom think they should be able to sue.

Before diving into these materials, make sure to complete the Week 1 classes (1, 1.2, and 1.3), the Week 2 classes (2: Can You Pay Attention?, 2.2: The Attention Evidence Gap, and 2.3: The Laws That Miss the Point), and the Week 3 classes (3: 20 Clicks to Cancel, 3.2: Why Dark Patterns Work, and 3.3: How Law Addresses Dark Patterns).

We begin today by looking at the harms and the evidence. On Wednesday, we dive into the legal architecture that has shielded platforms for thirty years, and the crack that’s now appearing. Then on Friday, we’ll get into the constitutional constraints, the regulatory experiments, and the trial happening in real time.

Ready? Let’s dive in.

The Interview Assignment

Before class, I asked your live counterparts to interview someone, like a friend, a family member, or an acquaintance, about the impact of their experiences with social media on their mental state. What was the platform? What was the harm? What caused it? What did they do about it? Would they want to sue?

I encourage you to do the same. Talking to people about their experiences can tell you a lot, and this semester we are trying to pair first-person lived experience with classroom learning.

So here’s my first question for you today:

[Poll]

In the live class, every single interviewee described harm they experienced, and we’ll get to some of those in a moment. But when the live students asked those individuals whether they’d want to sue? Every single person said no.

That gap, between universal experiences of harm and universal refusal to seek redress, is the puzzle this entire week is built around.

The Harms: Three Categories

As students shared what their interviewees described, a pattern emerged. See if it matches your intuition.

Category 1: Social comparison on steroids. One interviewee described “constantly comparing herself to others, questioning her life choices.” Another couldn’t shake the feeling that everyone else’s life was “way more fun.” Another said that “no matter what they accomplished,” it never felt like enough.

This isn’t new, of course. Humans have always compared themselves to each other. But social media doesn’t show you reality. It shows you an algorithmically amplified, curated, optimized version of other people’s best moments. The comparison isn’t to your neighbor’s actual life. It’s to a highlight reel that doesn’t exist.

Category 2: The algorithm that learns your vulnerabilities. One interviewee gave the most precise description I’ve read: Instagram “locked onto a very narrow set of themes much faster and then relentlessly reinforced them.” She tried to retrain it by searching for cooking videos. It worked, but only briefly. “One engagement post from a friend was enough to snap the algorithm right back.” Another interviewee’s OCD compulsions were reinforced by platforms that “provide endless opportunities to seek reassurance by searching and checking information repeatedly.”

A bookstore puts bestsellers at the front. An algorithm rearranges itself in real time based on your anxiety. Those are not the same thing.


Category 3: Design features independent of content. One interviewee’s harm had nothing to do with anything anyone said. WhatsApp’s “last seen” and “online status” features, which are pure platform-generated metadata, created “constant anxiety.” The harm was entirely a product of how the platform was built.

[Poll]

Hold onto that answer. It’s going to matter on Wednesday, when we learn that the law treats these very differently.

“It’s On Me”

Now here’s where it gets interesting. I asked students to ask their interviewees: Would you want to sue?

The reasons they said no were remarkably consistent:

“It’s my responsibility.” One interviewee said it was his “responsibility as the consumer to be more aware of the negative impacts.” Another: “Nobody was putting a gun to their head.”

“I consented.” “These apps make you consent and accept their terms and conditions when you sign up.” “She knew what she was signing up for.”

“I can’t quantify it.” “She wouldn’t know how to quantify the harm in a way that would make a lawsuit realistic.”

“It’s futile.” “Pointless given the scale and resources these companies would have.”

Now think back to Week 2. Several students described their own behavior during the Attention Audit as “automatic,” “muscle memory,” “habit.” They caught themselves scrolling before they’d consciously decided to scroll.

One interviewee said she “knew what she was signing up for.” She also said she was trapped in a loop she couldn’t escape. Can both be true? If so, what does “consent” mean?

Another interviewee laughed when asked about suing. She said “she’s not being taken advantage of in any way.” And then the student who interviewed her observed that she’d just spent the entire conversation describing how the algorithms were manipulating her. She couldn’t articulate the legal wrong, even as she described the harm in detail.

[Poll]

The Evidence: A Real Fight Among Serious People

So here’s the next question: Is the harm real? Not “do people feel harmed?” We can see that clearly from first-person reports and the interviews. The question is whether there is evidence that social media causes the harm people describe.

This is where I need you to pay close attention. Because this isn’t a simple story.

The two questions people conflate:

Jonathan Haidt, a social psychologist at NYU and author of The Anxious Generation, argues that we need to separate two different questions:

The historical trends question: Did social media cause the teen mental health crisis?

The product safety question: Is social media safe for kids who use it today?

These are different. A product can be unsafe without being the sole cause of a population-level trend. Which question matters more for law?

[Poll]

The product safety question is the one that matters for litigation (is the product defective?) and regulation (should we restrict it?). You don’t need to prove that cars caused all traffic deaths to regulate them as products.

What the evidence shows, and why smart people disagree about what it means:

Haidt organizes the evidence into seven lines. The strongest: recent meta-analyses of social media reduction experiments, randomized studies in which people stop or reduce their use. These find improvements in mental health, with effect sizes roughly comparable to the effect of childhood maltreatment on depression risk.

And then there’s Meta’s own internal research. In 2021, Frances Haugen, a former Facebook product manager, leaked thousands of pages of internal documents. Among them was evidence that Meta had conducted its own studies finding that 32% of teen girls who felt bad about their bodies said Instagram made them feel worse. Through litigation discovery, additional research was unsealed, including “Project Mercury,” a randomized controlled trial in which Meta hired Nielsen to recruit users to deactivate their accounts for a month. Meta’s own researchers called it “one of our first causal approaches.” They found that people who stopped using Facebook reported lower depression, anxiety, loneliness, and social comparison. A Meta researcher stated the study “does show causal impact.”

Meta found causal evidence of harm. And chose not to publish it.

The serious counter-position:

Amy Orben at Cambridge and Andrew Przybylski at Oxford published a specification curve analysis in Nature Human Behaviour, running millions of model permutations. They found that digital technology use explained at most 0.4% of the variation in adolescent wellbeing, an effect they compared to wearing glasses or eating potatoes.

Haidt and Jean Twenge responded that Orben and Przybylski had made analytical choices that shrank the effect: lumping all digital technology together instead of isolating social media, and combining boys and girls when effects are larger for girls. When they reran it for social media and girls only, the correlation was roughly r = .20. Orben didn’t dispute that number. The disagreement is what it means.
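One way to see how the two headline numbers relate: “variance explained” is just the square of the correlation coefficient r. Here’s a quick back-of-the-envelope sketch; the arithmetic is mine, not a calculation from either paper:

```python
# Converting between correlation (r) and variance explained (r squared).
# Illustration of the two framings only, not a reanalysis of either dataset.

r_reanalysis = 0.20           # Haidt & Twenge reanalysis: social media use, girls only
print(r_reanalysis ** 2)      # 0.04 -> about 4% of variance explained

var_explained = 0.004         # Orben & Przybylski: 0.4% of variance, all digital tech
print(var_explained ** 0.5)   # ~0.063 -> equivalent to a correlation of roughly r = .06
```

Same statistical family, very different headlines: r = .20 sounds meaningful, while 0.4% of variance sounds like potatoes. Part of the disagreement is which subgroup and which exposure you measure; part is simply which scale you report.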

The National Academies of Sciences, Engineering, and Medicine (NASEM) issued a consensus report in 2024 finding “small effects and weak associations.” Platforms have cited this in their legal defense. But a published critique noted that two committee members had received industry funding, and drew a direct parallel to how “industry-funded research has muddied the waters of the scientific literature” on tobacco, guns, and alcohol.

The twist:

In April 2025, Orben herself, the methodological critic Haidt most disagrees with, published an article in Science with J. Nathan Matias titled “Fixing the Science of Digital Technology Harms.”

Their argument was that the problem isn’t that social media is safe. The problem is that the scientific infrastructure is broken. Technology companies outsource safety research to underfunded academics while blocking access to the data needed to study their products. Companies then use the resulting lack of hard evidence to resist regulation, exactly what the tobacco, chemical, and firearms industries did.

Orben and Matias propose a “minimum viable evidence” system, adjusting the evidence threshold based on the severity and reversibility of potential harm, rather than requiring certainty before acting. They argue that requiring definitive proof of causation before regulating is itself an industry strategy.
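To make that idea concrete, here’s a toy sketch of the logic in code. This is my illustration only; the function, its weights, and the numbers are invented for this example, not taken from Orben and Matias’s actual proposal:

```python
# Toy sketch of a "minimum viable evidence" rule: the strength of evidence
# required before regulators act falls as a potential harm becomes more
# severe and less reversible. All weights here are invented for illustration.

def evidence_threshold(severity: float, reversibility: float) -> float:
    """Return the strength of evidence (0 to 1) needed before acting.

    severity:      0 (trivial) to 1 (catastrophic)
    reversibility: 0 (permanent) to 1 (fully reversible)
    """
    baseline = 0.9  # near-certainty demanded for trivial, reversible harms
    return baseline * (1 - severity) * (0.5 + 0.5 * reversibility)

# A mild, easily reversible harm demands strong evidence before regulating...
print(evidence_threshold(severity=0.2, reversibility=0.9))  # ~0.68
# ...while a severe, hard-to-reverse harm justifies acting on far weaker evidence.
print(evidence_threshold(severity=0.8, reversibility=0.2))  # ~0.11
```

The policy fight is over exactly the choices a sketch like this makes explicit: who sets the baseline, and how steeply the bar should drop as harms become severe and irreversible.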

So the leading methodological skeptic is not saying “social media is fine, don’t regulate.” She’s saying: “The science is deliberately kept inadequate by the companies whose products we’re trying to study, and that inadequacy shouldn’t prevent action.”

[Poll]

The Surgeon General put it this way in 2023: “We cannot conclude social media is sufficiently safe for children and adolescents.” That’s not saying it’s proven harmful. It’s saying the burden should be on the manufacturer. Like any consumer product.

What’s Coming This Week

We have a gap. People experience real harm. They can describe it. But they can’t articulate a legal wrong, and they don’t believe the legal system can help them.

Wednesday, we’ll see why. Section 230 of the Communications Decency Act has shielded platforms from liability for nearly thirty years. But a 2021 case, Lemmon v. Snap, cracked the wall open. There’s a distinction emerging between content claims and design claims. Whether the harms your interviewees described are legally reachable depends entirely on which side of that line they fall.

Friday, we’ll add the First Amendment, the regulatory experiments (Utah’s comprehensive law, KOSA, Australia’s under-16 ban), and the trial happening right now in Los Angeles.

By the end of the week, you’ll understand why this moment feels like the early days of tobacco litigation, and why it might be even harder.

See you Wednesday.

The entire class lecture is above, but if you’d like to support my work or go deeper in your learning, please upgrade to a paid subscription.

Paid subscribers also get access to class reading packs, discussion questions, bonus content, full archives, virtual chat-based office hours, additional readings, and one live Zoom-based class session per semester.

