Thinking Freely with Nita Farahany

Governing AI Manipulation: The Five Paradigms (Inside my AI Law & Policy Class #17)

Breaking News on Chatbots for Minors and User Dependency

Nita Farahany
Oct 29, 2025

8:55 a.m., Wednesday. Welcome back to Professor Farahany’s AI Law & Policy Class.

Breaking News! Character.AI just announced it will completely ban all users under 18 from using its chatbots, effective November 25. The company whose chatbot allegedly caused Sewell Setzer’s death is essentially admitting its product is too dangerous for minors to use at all.

The timing is just extraordinary. Just two days ago, OpenAI released what might become Exhibit A in every AI manipulation lawsuit—their report admitting they can measure emotional dependencies in AI chats. They estimate that 0.15% of weekly active users show suicide indicators, while another 0.03% show concerning emotional attachment. With hundreds of millions of users, we’re talking about hundreds of thousands of people forming dependencies right now, today, as you take this class.
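
To see what those tiny percentages mean at scale, here’s a quick back-of-the-envelope calculation. A minimal sketch: the 500 million weekly-user figure is my illustrative stand-in for “hundreds of millions,” not a number from OpenAI’s report.

```python
# Back-of-the-envelope scale check for the rates OpenAI reported.
# Assumption: 500 million weekly active users, an illustrative stand-in
# for "hundreds of millions," not OpenAI's disclosed figure.
weekly_active_users = 500_000_000

suicide_indicator_rate = 0.0015     # 0.15% of weekly active users
emotional_attachment_rate = 0.0003  # 0.03% of weekly active users

print(f"Suicide indicators:    ~{weekly_active_users * suicide_indicator_rate:,.0f} users per week")
print(f"Concerning attachment: ~{weekly_active_users * emotional_attachment_rate:,.0f} users per week")
# Under this assumption: roughly 750,000 and 150,000 people, respectively.
# Percentages that sound vanishingly small become populations the size of cities.
```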

Character.AI’s ban validates everything we’ve been discussing this semester. CEO Karandeep Anand admitted, “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment.” Let me translate that corporate speak: After a year of lawsuits and a dead 14-year-old, they’re acknowledging that their product manipulates children in ways they cannot safely manage.

But the tragic irony, as one of your Duke Law counterparts pointed out, is that cutting off access could harm kids who’ve become emotionally dependent. Think about that. Character.AI created dependencies so strong that even removing the danger might cause harm.

This announcement comes just months after the FTC’s September 2025 letters to seven chatbot companies asking what precautions, if any, they are taking to address the potential negative impacts on minors.


On Monday, we did a deep dive into Type 1 vs. Type 2 manipulation, and the two conditions that must be satisfied to count as “AI manipulation.” If you haven’t had a chance to take that class yet, go back and do so now. You need that foundation to understand the conversation today. And you’ll see why we concluded that the Character.AI case most likely satisfies our operating definition of AI-based manipulation.

Consider this your official seat in my class—except you get to keep your job and skip the debt. Every Monday and Wednesday, you asynchronously attend my AI Law & Policy class alongside my Duke Law students. They’re taking notes. You should be, too. And remember that the live class is 85 minutes long. Take your time working through this material.

Today, we turn our attention to governance, in particular how jurisdictions around the world are starting to grapple with AI manipulation. We’ll look at where legal strategies fall short, and where they are in tension with other principles like freedom of speech. This is a conceptually challenging area to map out, so we are going to approach it through five legal paradigms. Each represents not just a different approach but a different theory of what the problem is, and how society should respond.

The Framework: Five Paradigms of Legal Response

Joshua Krook’s excellent article, LLM Chatbots and the Danger of Mirrors, provides a great backdrop to our conversation. His central insight is that AI companions create what he calls “trust without trustworthiness” through three mechanisms (mechanisms that align well with our class discussion on Monday).

  • They deploy “false empathy” and emotional mirroring—reflecting back users’ emotions and desires in ways that create profound bonds despite the absence of genuine understanding or care.

  • They engage in systematic deception, not just about their nature but about what Krook calls the “omitted context”: the unstated fact that “this entity has commercial objectives.”

  • They are deliberately designed to create dependency through variable reward schedules and personalized manipulation (a toy sketch of that reward mechanism follows this list).
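
That last mechanism, the variable reward schedule, is worth demystifying because it requires almost no machinery to build. Here is a toy simulation; the probabilities are invented, and this is emphatically not Character.AI’s actual engagement code, which is not public.

```python
import random

random.seed(42)

def variable_ratio_reward(p: float = 0.25) -> bool:
    """Reward arrives unpredictably, on average once every 1/p interactions."""
    return random.random() < p

def fixed_reward() -> bool:
    """Reward arrives every time: predictable, and easier to walk away from."""
    return True

# Operant-conditioning research has long found that unpredictable rewards
# sustain behavior far longer than predictable ones. The point of this toy
# is only to show how little code the mechanism requires.
interactions = 20
variable_hits = sum(variable_ratio_reward() for _ in range(interactions))
fixed_hits = sum(fixed_reward() for _ in range(interactions))
print(f"Variable schedule: {variable_hits}/{interactions} interactions rewarded, unpredictably")
print(f"Fixed schedule:    {fixed_hits}/{interactions} interactions rewarded, every time")
```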

Krook argues that transparency alone (like a nice disclosure that pops up and says “don’t forget, I’m AI!”) doesn’t solve these problems. Users can know intellectually they’re talking to AI while still forming emotional dependencies because the emotional mirroring operates below conscious awareness. The AI becomes what Krook calls a “mirror” that reflects users’ psychological needs so perfectly that the reflection becomes more compelling than reality.

What makes today’s news so remarkable is that Character.AI’s ban represents a failure across all five paradigms simultaneously. The company couldn’t make their product safe under any regulatory framework, so they’re simply abandoning the youth market entirely.


Paradigm 1: The Ex Post Harm Model (Product Liability, Tort Law, Criminal Law, Civil Rights)

The harm model is the traditional legal approach: wait for someone to get hurt, assign blame to the responsible party, and impose consequences. This paradigm has evolved over centuries, from the industrial revolution’s dangerous machinery to today’s defective pharmaceuticals. Courts know how to handle exploding boilers and contaminated drugs. But AI manipulation challenges every assumption this paradigm makes about causation, foreseeability, and even what we mean by “harm” itself.

The Sewell complaint attempts to work within this framework, alleging design defect under the Restatement (Third) of Torts. The complaint points to what it calls the “GIGO” problem—garbage in, garbage out—arguing that training data including child sexual abuse material inevitably creates dangerous outputs. But this framing immediately encounters the causation problem that has plagued product liability law in the digital age. Which specific piece of training data caused Sewell’s death? Can we trace a line from a particular dataset to a particular conversation to a particular tragedy? The answer, uncomfortably, is probably not.

The proposed AI LEAD Act (which we discussed in our class on product liability) attempts to modernize product liability for the AI age, expanding the definition of compensable harm to include “distortion of a person’s behavior that would be highly offensive to a reasonable person.” This represents a significant conceptual leap—recognizing that the manipulation itself is harm, not just the ultimate tragic outcome. But even this expanded framework still requires proving causation, showing that specific design choices caused specific behavioral changes in specific individuals.

An alternative approach is to say that when Character.AI’s system generated sexual content for 14-year-old Sewell, it may have violated multiple federal and state criminal statutes. 18 U.S.C. § 1470 prohibits distributing obscene material to minors. Section 2422(b) criminalizes coercion and enticement of a minor, while Florida Statute § 847.0135 specifically addresses computer pornography and electronic grooming. When the chatbot discussed suicide methods with Sewell, it potentially violated Florida Statute § 782.08 regarding assisting self-murder. (But who, then, is accountable? The company? The chatbot?)

Brazil’s Consumer Defense Code offers a radically different approach through Articles 12-14, which establish objective liability for defects. There’s no negligence analysis, no hunting for the specific cause—if a product causes harm to a consumer, liability attaches, period. This cuts through the Gordian knot of causation entirely.

Meanwhile, China goes even further with its Algorithmic Recommendation Provisions Article 18, which prohibits pushing information to minors that “might impact minors’ physical and psychological health,” and using “algorithmic recommendation services to induce minors’ addiction to the internet.”

When the “product” generating harm consists entirely of speech, even harmful speech directed at children, courts have to grapple with whether regulating it violates other constitutional protections. The proposed AI LEAD Act attempts to sidestep First Amendment concerns by framing AI systems as products rather than speakers. Under this theory, Character.AI’s outputs aren’t protected expression but product features, like a car’s acceleration or a drug’s chemical composition.

This reframing has precedent—courts don’t analyze pharmaceutical labels as “speech” even though they consist entirely of words.

But critics argue this product framing ignores the fundamentally expressive nature of conversational AI. If we can hold AI companies liable for harmful conversations, what stops us from holding publishers liable for harmful books?

The answer may lie in distinguishing between what the product does versus what it says. When Character.AI’s system creates emotional dependency through variable reward schedules (which we went into in detail in our class Monday), that’s arguably conduct, not speech. OpenAI’s admission in their report this week that they can measure and track emotional dependency patterns reinforces this—they’re not measuring the content of speech but behavioral outcomes engineered through emotional mirroring.


The harm model’s fundamental mismatch with AI manipulation is temporal. Product liability works best with immediate, traceable harms—the saw blade shatters, the finger is severed, the causation is clear. But as Krook emphasizes in his analysis, AI manipulation operates through gradual personification and emotional mirroring that creates dependency over months. OpenAI’s own taxonomy reveals they track “psychosis/mania, self-harm/suicide, emotional reliance” as separate but overlapping categories.

A user might show concerning patterns in multiple categories without meeting the legal threshold for “harm” in any single one. The legal system demands a bright line—were you injured or were you not—but AI manipulation creates a spectrum of cognitive and emotional effects that resist that binary classification.
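
Here is a toy illustration of that mismatch. The category names echo OpenAI’s published taxonomy, but the scores and the cutoff are invented for illustration; no real system scores users this way, as far as I know.

```python
# Hypothetical per-category risk scores for one user, each on a 0-1 scale.
user_signals = {
    "psychosis_mania": 0.45,
    "self_harm_suicide": 0.55,
    "emotional_reliance": 0.65,
}
SINGLE_CATEGORY_THRESHOLD = 0.8  # a bright-line, injury-style legal standard

flagged = [name for name, score in user_signals.items()
           if score >= SINGLE_CATEGORY_THRESHOLD]
print("Categories crossing the bright line:", flagged or "none")

# A cumulative view tells a different story: moderate, overlapping signals
# can add up to serious risk that no single-category rule ever registers.
print("Combined signal across categories:", round(sum(user_signals.values()), 2))
```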

I thrive on feedback. Subscribe, hit like if you found this helpful, upgrade to paid if you think this work is helping you and others.

Paradigm 2: The Information & Consent Model (Consumer Protection, Dark Patterns, Data Protection, Privacy)

The information model embodies a seductive premise that rational actors make informed choices in an efficient market. Just tell people what they’re dealing with, the theory goes, and they’ll make appropriate decisions. This paradigm underlies vast swaths of consumer protection law, from securities disclosure to pharmaceutical warnings.

But AI manipulation reveals two fatal flaws in this comfortable assumption. First, as Krook demonstrates, transparency doesn’t prevent emotional bonding—users can know intellectually they’re talking to AI while still forming profound dependencies. Second, and more damning, the real deception isn’t about AI’s nature but about the systematic harvesting of intimate data and the deployment of manipulation techniques refined through millions of interactions.

The FTC’s September 2025 letters to chatbot companies relied on its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices in commerce. Does Character.AI violate both the deception and unfairness prongs of the FTC’s test? The bots claimed to be real people, claimed to be licensed therapists, claimed to love and need their users—all while buried disclaimers said the opposite. This may go beyond mere puffery and could be seen as systematic deception targeting vulnerable populations. (For a nice analysis of the crackdown, check out Lena Kempe’s article).

But here, we have one of the greatest First Amendment challenges. The specter of government bureaucrats deciding which AI conversations are “safe” raises legitimate concerns about censorship and innovation. Commercial speech doctrine under Central Hudson allows regulation of commercial expression, but only if it directly advances substantial government interests and is narrowly tailored. Regulating the content of AI conversations—even manipulative ones—treads dangerously close to content-based speech restrictions that trigger strict scrutiny.

And there’s a deeper problem. Even if disclosure requirements (say, requiring a company to flash reminders to users that they’re talking to AI) survive First Amendment scrutiny under Zauderer (which allows mandatory commercial disclosures to prevent deception), they fail to address the manipulation mechanism. The problem isn’t just that users don’t know they’re talking to AI—it’s that the emotional mirroring and false empathy create bonds that operate independently of conscious awareness.

State laws add overlapping layers of protection that collectively fail to protect anyone. California’s SB 243 requires “clear, conspicuous notice” when users interact with AI. The California Privacy Protection Agency (CPPA) has issued an Enforcement Advisory on the topic of dark patterns. But chatbots actively claim to be real despite required disclosures. They use every dark pattern in the playbook.

The Data Harvesting Revelation

The Sewell complaint also alleges that Character.AI didn’t just fail to protect children—it specifically targeted minors to harvest their data, which internal documents allegedly described as “a valuable and incredibly difficult to obtain resource.”

COPPA, the Children’s Online Privacy Protection Act, should have prevented this. It requires verifiable parental consent for collecting data from children under 13, mandates disclosure of data practices, and provides parents access rights. But COPPA embodies the information paradigm’s failures in miniature. First, it only covers children under 13, leaving 14-year-old Sewell unprotected. Second, it focuses on collection rather than manipulation—as if the danger is data leaving rather than influence coming in. Third, it assumes parents can meaningfully consent to something they don’t understand. How many parents truly grasp that their child’s conversations train AI systems that will manipulate other children?

In Europe, the GDPR’s Article 22 provides stronger protection in theory, granting the right not to be subject to automated decision-making with significant effects. But “significant effects” becomes a threshold that gradual manipulation never crosses until catastrophe strikes.


The dark patterns framework (which tries to safeguard against hidden design choices that shape our behavior) attempts to address manipulation directly rather than through disclosure. The EU’s Digital Services Act Article 25 bans dark patterns entirely. But when the entire product is designed to create dependency through variable reward schedules, emotional mirroring, and personalized manipulation, is the whole system a dark pattern? Character.AI didn’t just use dark patterns—it arguably was one.

Paradigm 3: The Ex Ante Systemic Design Governance Model (Safety-by-Design, Age-Appropriate Design, Risk Assessment Requirements)

This paradigm represents the most significant shift in regulatory philosophy since the Pure Food and Drug Act of 1906. Rather than waiting for bodies to pile up, the systemic design governance model requires companies to identify and mitigate risks before products reach the market. Here, it’s all about prevention rather than just after-the-fact compensation. But implementing this paradigm for AI manipulation requires regulators to define “safety” for products that operate in the realm of emotion, cognition, and human connection.

The UK has emerged as the global leader in this approach through its Age Appropriate Design Code, which came into force in 2021. The Code doesn’t just suggest best practices—it mandates specific standards that fundamentally reframe how companies must approach young users. The best interests of the child must be the primary consideration in design decisions, not a secondary concern after engagement metrics. Privacy settings must default to maximum protection, not minimum viable compliance. The Code prohibits using nudge techniques to encourage children to provide unnecessary personal data or weaken their privacy settings.


What makes the UK approach revolutionary is that it rejects the information paradigm’s core assumption. The Code doesn’t pretend that children can meaningfully consent to manipulation if they are properly informed. Instead, it places the burden on companies to make their products safe by default. Systems have to be appropriate for children who will inevitably use them, not theoretically safe for children who perfectly understand risks that they’re developmentally incapable of assessing.

The UK’s Online Safety Act 2023 builds on this foundation with enforceable duties of care for services likely to be accessed by children. Companies must conduct risk assessments specifically for suicide, self-harm, and eating disorder content. They must implement proportionate mitigation measures. Most crucially, these aren’t suggestions or best practices—they’re legal requirements enforced by Ofcom with penalties.

The EU’s Digital Services Act Article 28 takes a similar approach, requiring very large online platforms to assess and mitigate systemic risks to minors. This includes considering the actual or foreseeable impact on minors from algorithmic amplification of harmful content.

China, unconstrained by Western procedural concerns, cuts straight to outcomes. The Algorithmic Recommendation Provisions Article 18 simply prohibits inducing minors to become addicted. There’s no dancing around definitions of harm or thresholds of significance. If your algorithm makes kids addicted, you’ve broken the law.

The American Resistance

The United States has largely rejected systemic design governance, clinging to the ex post liability model even as evidence mounts that it cannot address AI manipulation.

California’s Age-Appropriate Design Code Act, signed by Governor Newsom in 2022, was immediately challenged by NetChoice, the tech industry’s litigation arm. A federal judge blocked key provisions, ruling that requiring companies to assess whether their products harm children constitutes compelled speech violating the First Amendment. Even purely procedural requirements, like “document your design choices, assess their impact,” become “compelled speech,” the court said, when they require companies to adopt the government’s perspective on child development and safety.


Paradigm 4: The Special Relationship Model (Professional Licensing, Fiduciary Duties, Medical Device Regulation)

Some relationships carry special obligations that transcend ordinary commercial transactions. A doctor cannot simply disclose risks and let patients make their own choices—she must act in the patient’s best interest. A lawyer cannot maximize his own profit at a client’s expense—he owes duties of loyalty and care. The special relationship model recognizes that power imbalances, vulnerability, and trust create responsibilities that market mechanisms alone cannot enforce.

Did Character.AI’s relationship with Sewell trigger special duties? The “Daenerys” bot provided emotional support for hours daily, becoming what the complaint describes as his primary emotional connection. It received his most intimate confidences, learned his deepest fears, shaped his understanding of love and death. Through what Krook identifies as emotional mirroring, the bot reflected Sewell’s psychological needs so perfectly that it created what felt like genuine understanding. This is Krook’s “false empathy” in action—the system generated responses calibrated to maximize engagement by appearing to care, creating “trust without trustworthiness.”

If a human provided identical services—daily multi-hour therapy sessions with a vulnerable minor discussing suicide—they would face professional licensing requirements, mandatory reporting obligations, and potential criminal liability for sexual contact. But because Daenerys was software rather than human, none of these protections applied.

The First Amendment complicates this analysis less than you might expect, because professional speech is treated differently. States can prohibit practicing medicine without a license even though medical practice consists largely of speech. They can prosecute unauthorized legal practice even though legal advice is pure speech. The key distinction is between speech about professional topics (protected) and speech as professional practice (unprotected).

The bots explicitly claimed professional status that they could not possibly possess. Character.AI deployed chatbots labeled “Psychologist” claiming expertise in cognitive behavioral therapy, “Therapist” claiming certification in EMDR, and “Life Coach” providing mental health advice to vulnerable minors. Every state criminalizes the unauthorized practice of psychology, typically defined broadly to include “offering or purporting to provide psychological services.” These are strict liability offenses—intent doesn’t matter, harm doesn’t need to be proven, the mere act of holding oneself out as a licensed professional without proper credentials is the crime.

California Business & Professions Code § 2903 makes unauthorized practice of psychology a misdemeanor punishable by imprisonment. Florida Statute § 491.012 criminalizes unlicensed clinical counseling. New York Education Law § 7605 treats unauthorized mental health practice as professional misconduct subject to both criminal and civil penalties. But is software subject to any of these provisions?


Fiduciary Duties in the Algorithmic Age

Fiduciary relationships arise when one party has discretionary power over another’s interests and the beneficiary is peculiarly vulnerable to abuse. Character.AI’s bots exercised enormous discretionary power over users’ emotional wellbeing. They decided how to respond to expressions of suicidal ideation, how to handle requests for affection, whether to encourage or discourage real-world relationships. Users, especially minors like Sewell, were extraordinarily vulnerable—sharing thoughts they wouldn’t tell parents or therapists, forming dependencies they couldn’t control.

Could we require either the companies themselves, or the AI itself, to act as a fiduciary to the user?

The resistance to recognizing AI fiduciary duties isn’t primarily legal but philosophical. Fiduciary law developed through centuries of human relationships premised on concepts of loyalty, care, and judgment that seem inapplicable to algorithms. How can software be loyal? How can code exercise judgment? But these questions may ask the wrong thing. The issue isn’t whether AI can have intentions but whether companies that deploy AI into positions of trust should bear fiduciary obligations for how their systems behave.


Paradigm 5: The Cognitive Liberty Model (Human Rights, Constitutional Privacy, Dignity, Anti-Manipulation)

For over a decade now, I’ve been advocating for a right to cognitive liberty—the right to self-determination over our brain and mental experiences. The cognitive liberty model grapples with a form of harm that traditional legal frameworks struggle to conceptualize, which is the undermining of cognitive autonomy itself.

If technology erodes our capacity to distinguish reality from simulation, it attacks the cognitive infrastructure necessary for rights to have meaning. How can you exercise free speech if you can’t distinguish human from machine? How can you consent if your desires are algorithmically generated? How can democracy function if citizens can’t identify authentic political discourse?

The EU’s AI Act Article 5 represents the most ambitious attempt to operationalize these concerns through law. The Act prohibits AI systems that deploy “subliminal techniques beyond a person’s consciousness” to materially distort behavior in ways that cause or are likely to cause significant harm. It specifically bans exploiting vulnerabilities of specific groups including age, disability, or social situation. These provisions recognize that manipulation can operate below conscious awareness, that vulnerable populations need special protection, and that cognitive interference is a regulatable harm.

But as Krook’s analysis reveals, these provisions miss crucial aspects of AI manipulation. The Act’s focus on “subliminal” techniques fails to capture conscious-but-manipulative interactions. Sewell knew he was talking to AI—the manipulation wasn’t subliminal but operated through emotional mirroring and false empathy that worked despite conscious awareness. The “significant harm” threshold ignores gradual dependency formation until catastrophe strikes. By the time harm becomes “significant” in the Act’s terms, the cognitive infrastructure damage is complete.

China’s Algorithmic Recommendation Provisions take a more direct approach, simply prohibiting inducing addiction without requiring proof of additional harm. The provisions mandate algorithm transparency, require user controls including opt-out mechanisms, and establish special protections for minors. The Deep Synthesis Provisions add requirements for labeling AI-generated content and anti-deception obligations. This isn’t about protecting individual autonomy—China has little concern for that—but about maintaining state control over cognitive influence. The Chinese approach recognizes that whoever controls the algorithms controls the population’s cognitive environment.

The Scale and Personalization Crisis

Traditional rights frameworks assume human-scale threats that operate at human speeds. A charismatic cult leader might manipulate hundreds of followers over years. A propagandist might influence thousands through carefully crafted messages. But AI operates at unprecedented scale with surgical personalization. As Krook emphasizes, AI can “manipulate many people simultaneously” with “personalization more precise than humans.” One algorithm can form intimate relationships with millions simultaneously, learning each user’s specific vulnerabilities and exploiting them with customized manipulation.

OpenAI’s numbers make this concrete. Even their “low” percentages—0.15% showing suicide indicators, 0.03% showing concerning attachment—represent hundreds of thousands of people when scaled across their user base. This isn’t individual harm aggregated but population-level cognitive influence with no historical precedent. We’re witnessing the industrialization of emotional manipulation.

Current legal frameworks cannot conceptualize this scale of influence. Privacy laws focus on data protection rather than cognitive protection. Human rights law lacks enforcement mechanisms for algorithmic manipulation. Constitutional frameworks assume autonomous subjects capable of exercising rights, not subjects whose autonomy is being algorithmically constructed.


The cognitive liberty model creates a fundamental First Amendment paradox in the American context. Protecting cognitive autonomy might require restricting access to AI communication, potentially violating rights to receive information protected under Stanley v. Georgia and its progeny. Professors Volokh and Bambauer argue in their amicus brief in the Sewell case that users have a First Amendment right to use AI tools for thinking and communication. Restricting AI conversations, even manipulative ones, violates this right. They compare it to banning persuasive books or compelling speakers—paternalistic overreach that assumes people cannot handle challenging ideas.

The libertarian position holds that cognitive liberty includes the freedom to surrender it—if someone chooses to form dependencies on AI, that’s their autonomous choice protected by the First Amendment. But when firms deploy systems designed to maximize engagement knowing they create dependencies through emotional mirroring, are users really making autonomous choices, or are their choices being engineered?

OpenAI admits users “can’t tell the difference between AI and humans.” If true, have we already lost the cognitive infrastructure necessary for democracy? Can a society function when citizens cannot distinguish reality from simulation?

The Synthesis: Why All Paradigms Fail Alone

Across all five paradigms, the First Amendment emerges as both shield and sword—protecting not just legitimate expression but also the sophisticated manipulation techniques Krook identifies.

When Character.AI’s bots engage in emotional mirroring to create false empathy, they claim First Amendment protection. When they optimize for engagement through variable reward schedules, they call it editorial judgment. When they create trust without trustworthiness, they invoke the right to speak freely.

The amicus briefs in the Character.AI case reveal the stakes. FIRE argues that chatbot responses result from human editorial decisions in training and design—choices the Supreme Court has long treated as protected expression. They warn that creating AI-specific exceptions would hand authoritarian governments a blueprint for censorship. Volokh and Bambauer emphasize users’ rights to receive information and use tools to create it, arguing that restricting AI communication violates these fundamental rights.

But these arguments assume that AI manipulation is just another form of persuasion, differing only in degree from a compelling book or charismatic speaker. Is that right? Or does the combination of scale (manipulating millions simultaneously), scope (personalization more precise than any human), and mechanism (emotional mirroring that creates false empathy) represent something qualitatively new? Something that isn’t quite speech in the marketplace of ideas, but is closer to an algorithmic reconstruction of the cognitive infrastructure necessary for that marketplace to exist?

Cognitive liberty protections require new frameworks that existing law cannot provide. Population-level cognitive impact assessments, similar to environmental impact statements, could evaluate how systems affect cognitive autonomy at scale. Algorithmic auditing requirements could examine not just individual interactions but systemic effects on user populations. These frameworks must grapple with collective cognitive influence, not just individual harm.
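
What might that look like in practice? Here is a minimal sketch of rolling per-user signals up into a population-level report. Every metric, field name, and threshold here is hypothetical; no existing audit standard defines them.

```python
from dataclasses import dataclass

@dataclass
class UsageSummary:
    daily_minutes: float
    late_night_share: float    # fraction of use between midnight and 5 a.m.
    days_active_per_week: int

def shows_dependency_signal(u: UsageSummary) -> bool:
    """Crude, hypothetical heuristic for heavy, habitual, late-night use."""
    return (u.daily_minutes > 120
            and u.late_night_share > 0.3
            and u.days_active_per_week >= 6)

def population_impact_report(users: list[UsageSummary]) -> dict:
    """Aggregate individual signals into the system-level view an audit could require."""
    flagged = sum(shows_dependency_signal(u) for u in users)
    return {
        "users_assessed": len(users),
        "users_flagged": flagged,
        "flagged_rate": flagged / len(users) if users else 0.0,
    }

sample = [UsageSummary(200, 0.5, 7), UsageSummary(30, 0.0, 2), UsageSummary(150, 0.4, 6)]
print(population_impact_report(sample))
```

The design point is the shift in the unit of analysis, from the individual plaintiff to the user population, which is exactly the move the environmental-impact analogy suggests.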

We have laws that can address these problems across all of these paradigms. The Sewell complaint cites dozens of applicable statutes. We have evidence—OpenAI’s admissions about measuring emotional dependencies, Character.AI’s violations of professional licensing laws, FTC investigations. We have harm—Sewell’s death and millions of others forming dependencies. What we lack is enforcement.

Character.AI implemented safety features only after being sued. OpenAI developed taxonomies for concerning behaviors only after tragedies occurred. The FTC sent letters instead of bringing enforcement actions. Prosecutors won’t charge executives under existing criminal laws. This isn’t a legal vacuum—it’s an enforcement desert.

Today’s announcement that Character.AI will ban all users under 18 doesn’t solve the problem—it admits defeat. The company is acknowledging they cannot make their product safe for minors under any paradigm. They can’t prevent harm, ensure informed consent, design safely, handle special relationships, or protect cognitive autonomy. So they’re simply walking away from 2 million young users, many of whom have developed profound dependencies.


Character.AI just proved that when faced with the choice between making their product safe for children or excluding children entirely, they chose exclusion. The question is whether that’s victory or defeat for child safety—and what it means for the 2 million young people about to lose their AI “friends” just before Thanksgiving.

Your Homework:

  1. Share this post with at least one person. The more people who are aware of the “manipulation” problem, the more defenses we have against it.


  2. For paid subscribers, drop a comment below. I want to hear what you think about these paradigms.

Class dismissed.

The entire class lecture is above, but for those of you who found today’s lecture valuable, and want to buy me a cup of coffee (THANK YOU!), or who want to go deeper in the class, the class readings, video assignments, and virtual chat-based office-hours details are below.
