Thinking Freely with Nita Farahany

When Anyone Can Fake Anything (Inside My AI Law & Policy Class #14)

Deepfake Technology and the Post-Truth World

Nita Farahany
Oct 20, 2025

8:55 a.m., Monday. Welcome back to Professor Farahany’s AI Law & Policy Class! I hope you had a great fall break. Let’s begin with a short snippet from the live class:

Wait … is that actually me? In my actual Duke Law classroom?


Perhaps a little context here would help. On September 30, 2025, OpenAI released Sora 2, initially invite-only and rolling out in the U.S. and Canada. Sora 2 produces photorealistic video with synchronized dialogue, sound effects, and better physical realism: if a basketball player misses a shot, the ball rebounds off the backboard instead of spontaneously teleporting to the hoop. Users can upload themselves once and appear in generated scenes via “cameos,” and they can choose to share that cameo with friends.

Oh, and it’s also a social app. Think TikTok, but where every video is AI-generated and you can insert friends who’ve shared their “cameo” with you. They can revoke access or remove videos later, but once something is generated and shared, the potential for harm exists.

That video from “class”? I created it a few minutes before the live class, in my office, using Sora 2 on my phone. I read off three numbers, looked left, right, and up. And voilà: a video (and is my likeness now training data forever?).

But before I panic about what I just shared with you and Sora 2, let’s focus on some of the harms from deepfakes that have already arrived. 90-95% of deepfake videos created since 2018 are non-consensual pornography of women. Not election interference. Not corporate fraud. Not international espionage. Just people targeting primarily women with AI-generated sexual content at scale.

Consider this your official seat in my class—except you get to keep your job and skip the debt. Every Monday and Wednesday, you asynchronously attend my AI Law & Policy class alongside my Duke Law students. They’re taking notes. You should be, too. And remember, the live class is 85 minutes long. Take your time working through this material.

Just joining us? Go back and start with Class 1 (What is AI?) and Class 2 (How AI Actually Works), and check out the full syllabus here.


The live students (your Duke Law counterparts) instantly thought of our prior conversation about Jennifer DeStefano, who thought her daughter had been kidnapped because of an AI scam, and about the growing prevalence of deepfake pornography. (One student even knew of a friend who had received deepfake images of herself.) Another explained how her brother uses it in his fantasy league to generate images of their favorite players.

So today, as we grapple with all of this, we’re tackling four questions:

  • What are deepfakes, exactly? (It’s more than just video)

  • How are they actually made? (It’s easier than you think)

  • Where are the real harms occurring? (Not where the headlines suggest)

  • Why does detection keep failing? (The answer will make Wednesday’s discussion much more interesting)

On Wednesday, we’re going to examine how governments and industry are trying to respond. Ready? Let’s start with a question that seems simple but isn’t.

I. What Are Deepfakes? (And Why the Answer Matters for Governance)

In September of 2017 a Reddit user named “deepfakes” posted a series of videos to the CelebFakes subreddit in a thread requesting manipulations of Game of Thrones actress Maisie Williams. The footage clearly featured a virtual recreation of Williams’s face—striking enough that someone asked deepfakes to share the algorithm he was using.

As deepfakes posted more simulations, including one featuring Emma Watson, more requests for the AI source followed. Eventually, deepfakes launched a new subreddit specifically for these video celebrity swaps—r/deepfakes—and released the script for his face-swapping process onto Reddit.

That’s the origin story. One Reddit user weaponized AI to create non-consensual pornography of celebrities, then shared the tools so anyone could do it.

And according to Sensity AI, that original use case is still the dominant one. Ninety to ninety-five percent of deepfake videos created since 2018 are non-consensual pornography of women. Researchers have found Telegram ecosystems that facilitate generation and sharing of this content, with particular concerns about minors being targeted.

A. Deepfakes vs. Cheapfakes

When governments try to regulate “deepfakes” or “synthetic media” or “AI-generated content,” they often use these terms interchangeably. They’re not the same thing. And as we’ve learned throughout the class, when it comes to governing AI, technical precision matters.

Synthetic media is the umbrella term covering everything from basic Photoshop to AI generation. Some regulations target this broader category, and as we’ll see on Wednesday, each governance choice creates different problems. Regulate too broadly and you’ve criminalized the entire internet. Regulate too narrowly and you miss harms like the Pelosi video.

1. Deepfakes are AI/ML-generated realistic media of events that never happened. These are images, video, audio, and text that sit within the broader synthetic media umbrella.

2. Cheapfakes are basic manipulations with no AI. Remember the 2019 Nancy Pelosi video where someone slowed down her speech to make her sound drunk? That went massively viral. That was just video editing software and malicious intent, with no artificial intelligence required, and yet real damage was done.

How do you make a cheapfake? Open iMovie, Adobe Premiere, or any basic video editor. Slow down the footage or speed it up, cut and splice clips together out of context, or crop images to remove context. Or adjust the color or brightness to change the apparent mood. These are the same editing tools anyone uses for legitimate purposes, but in cheapfakes they’re used to deceive.
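To make concrete just how low the bar is, here’s a minimal sketch, in Python with OpenCV, of a Pelosi-style slowdown. The filenames and the 75% speed factor are my illustrative assumptions; the point is that this is frame re-timing, not AI.

```python
# A minimal sketch of a "cheapfake" slowdown: no AI, just re-timing frames.
# Assumes OpenCV (pip install opencv-python); "speech.mp4" is a hypothetical input.
# (Real cheapfakes also slow and pitch-correct the audio; OpenCV handles video only.)
import cv2

SLOWDOWN = 0.75  # play back at roughly 75% speed

cap = cv2.VideoCapture("speech.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing the same frames at a lower frame rate slows the apparent speech.
out = cv2.VideoWriter(
    "speech_slowed.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * SLOWDOWN,
    (width, height),
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```

About a dozen lines, no machine learning anywhere, and this is roughly the class of edit that did real political damage in 2019.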

By contrast, deepfakes use AI and machine learning to create realistic but false media, and that requires models trained on large datasets: either models you train yourself (which takes technical expertise, computing power, and training data) or consumer-facing apps that have done the work for you.


There’s no clean answer. Every choice creates problems. Welcome back to AI governance. We remain pessimistic in class, even after a healthy fall break.

Share Thinking Freely with Nita Farahany

B. The Four Modalities (Because Video Is Only Part of the Story)

Images are often the easiest to make and can be harder to detect than you’d think. Researchers have documented GAN-generated persona photos used in influence campaigns—operations in Belgium around 5G discourse and in Lebanon, where state-linked actors deployed synthetic profile pictures to make entire fake networks seem credible.

But it tends to be video that gets the most headlines.

The timeline from the U.S. Department of Homeland Security report, Increasing Threat of Deepfake Identities, provides some representative examples over time. And with the launch of Sora, you can expect many more of these examples, less obvious and more troubling.

[Figure: DHS timeline of representative deepfake incidents.]

One of the biggest fears going into the 2024 elections was that deepfakes would become a huge issue. Preet Bharara and I did a deep dive into this on his podcast in May of 2024, using a hypothetical case to see whether existing laws would address the use of deepfakes by a political candidate.

But as Mark Scott reported in POLITICO, despite enormous fears about election manipulation, there’s “little if any evidence” that video deepfakes actually skewed election outcomes in Pakistan or Indonesia. We’ll come back to this gap between fear and evidence.

Audio might be the scariest fraud vector because we trust phone calls. Think of the Biden robocalls in New Hampshire telling Democrats not to vote, or the audio deepfakes of UK Labour leader Keir Starmer and Slovakia’s Michal Šimečka that spread on social media before being debunked.

Text is what almost nobody talks about, but it might matter most for volume: AI-generated news articles, social media posts, comment sections flooded with targeted content. The DHS report notes that text deepfakes are not so easy to detect in practice. And when deepfakes are text, the volume is basically infinite.

Mark Scott’s POLITICO reporting highlights how the “boring” machine learning for microtargeting—granular targeting using socioeconomic data—might matter more than flashy video deepfakes for actual influence operations.

The math is sobering. This class has over 9,200 participants. If each of you generated just ten election-related deepfake videos today, that would result in 92,000 new pieces of false media in one day from one class. Multiply that by billions of people with smartphones worldwide.

Which means the governance question isn’t “can people make deepfakes?” (that barrier has fallen dramatically). It’s: how do you govern when anyone can generate hyper-realistic fake media faster than it can be fact-checked?


II. How Are They Actually Made? (It’s Easier Than You Think)

What once required significant technical expertise and expensive equipment to generate fake media now requires neither. Let’s look at the three core techniques:

A. Face-Swapping (Where It All Started)

Think of face-swapping like creating a digital mask that moves. The AI studies hours of video of both people, learning every angle, every expression, how their features move. Then it takes Person A’s movements and reconstructs them using Person B’s face. Same expression, same timing, different face.

Tools like FaceShifter, FaceSwap, DeepFaceLab, and Reface do all of this automatically. Real-time filters on Snapchat and TikTok use the same underlying technology.
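For those who want to see the moving parts, here’s a highly simplified structural sketch, in PyTorch, of the shared-encoder, two-decoder design that early face-swapping tools popularized. This is my own illustration rather than any of those tools’ actual code, and the dimensions are toy-sized, but it shows the core trick: one encoder learns pose and expression, and each person gets their own decoder.

```python
# Structural sketch (not production code) of the classic face-swap autoencoder:
# one shared encoder, one decoder per identity. Train decoder A on person A's
# face crops and decoder B on person B's; at swap time, encode A's frame and
# decode it with B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
        )
    def forward(self, x):
        return self.net(x)  # shared pose/expression representation

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)  # reconstructed face crop

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

def train_step(face_a, face_b, opt):
    # opt would be an optimizer over all three modules' parameters.
    # Each decoder learns to reconstruct its own person from the shared code,
    # which is why hours of footage of both people are needed.
    loss = nn.functional.l1_loss(decoder_a(encoder(face_a)), face_a) \
         + nn.functional.l1_loss(decoder_b(encoder(face_b)), face_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": person A's pose and expression, rendered as person B's face.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)   # stand-in for a real face crop
    swapped = decoder_b(encoder(frame_a))
```

Real tools layer face detection, alignment, masking, color correction, and adversarial losses on top, but the swap itself really is this simple: person A’s latent “performance,” rendered with person B’s decoder.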

The harms are exactly what you’d expect. According to the DHS report, Increasing Threat of Deepfake Identities, non-consensual pornography has targeted actresses like Kristen Bell and Scarlett Johansson, with some “leaked” fakes getting over 1.5 million views. But the report also emphasizes the disproportionate impact on private individuals versus public figures: celebrities have resources to fight back; private citizens often don’t. The report documents Noelle Martin’s case, in which attackers used a tool called “Reflect” to create face-swap images in five minutes, then sent them to her family and friends and posted them online, with a lifetime of consequences.

B. Lip-Syncing (Making Anyone Say Anything)

This maps audio to video so a person appears to say new words. A model called Wav2Lip achieves “speaker-independent” lip-syncing, meaning you can make anyone say anything without training the model on that specific person first.

How good is it? In human testing, people preferred Wav2Lip outputs over 90% of the time compared to either unsynced video or baseline methods.
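Why “speaker-independent”? Because the generator isn’t trained on a particular person at all; it’s conditioned on whatever audio and face frames you hand it at inference time. Here’s a stripped-down structural sketch, again in PyTorch and again my own illustration rather than Wav2Lip’s actual code, with toy dimensions:

```python
# Structural sketch of speaker-independent lip-sync: the generator is
# conditioned on a short window of audio features plus frames of the target
# face, so it never needs per-person retraining. Dimensions are illustrative.
import torch
import torch.nn as nn

class LipSyncGenerator(nn.Module):
    def __init__(self, audio_dim=80, audio_frames=16):
        super().__init__()
        # Audio branch: embed a short window of mel-spectrogram features.
        self.audio_enc = nn.Sequential(
            nn.Flatten(), nn.Linear(audio_dim * audio_frames, 256), nn.ReLU()
        )
        # Face branch: current frame with the mouth masked + a reference frame.
        self.face_enc = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.ReLU(),    # 96 -> 48
            nn.Conv2d(64, 256, 4, stride=2, padding=1), nn.ReLU(),  # 48 -> 24
        )
        # Decoder renders the lower face to match the audio.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256 + 256, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_face, reference_face, mel_window):
        a = self.audio_enc(mel_window)                              # (B, 256)
        f = self.face_enc(torch.cat([masked_face, reference_face], dim=1))
        a = a[:, :, None, None].expand(-1, -1, f.shape[2], f.shape[3])
        return self.decoder(torch.cat([f, a], dim=1))               # synced face crop

gen = LipSyncGenerator()
out = gen(torch.rand(1, 3, 96, 96), torch.rand(1, 3, 96, 96), torch.rand(1, 80, 16))
print(out.shape)  # torch.Size([1, 3, 96, 96])
```

Once trained on a large corpus of talking faces, a model with this shape can take a new face and a new audio clip and produce a synced mouth region with no per-person retraining, which is exactly what makes it so easy to abuse.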

Consider the enhanced social engineering scenario from the DHS report:

You’re the Finance Director at a midsize company. Your phone rings. It’s your CEO. You recognize her voice—you’ve worked together for five years. She says there’s a confidential joint venture opportunity. Time-sensitive. She needs you to wire $250,000 to a specific account today. She mentions something personal that makes you confident it’s really her—maybe something about your kid’s soccer game.

So you wire the money.

Except it wasn’t her. The attacker cloned her voice from public earnings calls and podcasts. The personal details came from LinkedIn and Facebook. All public information, just weaponized.


In the moment, with your CEO saying it’s urgent and mentioning personal details, would you actually think to verify? Or would you trust your ears?

One of the live students said they had recently done exactly this with Wells Fargo, calling the bank back to verify it was really them. Another student said that their law firm does this as a training exercise: if a client calls asking for information, the lawyer has to call them back at a known number. After reading this, what precautions are you going to put into place to safeguard against this kind of fraud?

C. The Puppet Master (Full Body Control)

This uses GANs—generative adversarial networks—to give one person control over another person’s digital image. One “master” performer does movements that drive the “puppet” target’s facial and body movements.

The system is adversarial by design. Better detection leads directly to better generation. It’s an arms race baked into the technology itself.
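A minimal GAN training loop makes the arms-race point concrete. This is a generic toy example (my illustration, random data, tiny dimensions), not any real deepfake system, but notice the structure: the generator’s only learning signal is whatever still fools the freshly updated discriminator.

```python
# A minimal GAN loop (toy dimensions, illustrative only) showing why "better
# detection leads to better generation": the generator improves only against
# whatever the discriminator has just learned to catch.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))  # real-vs-fake score

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def step(real_batch):
    batch = real_batch.shape[0]
    fake = G(torch.randn(batch, 32))

    # 1) The "detector" improves: learn to score real high, fake low.
    d_loss = bce(D(real_batch), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) The generator improves against the *updated* detector: it is
    #    rewarded only for samples the detector now misclassifies as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random "real" data, just to show the loop runs.
for _ in range(3):
    print(step(torch.randn(16, 784)))
```

Swap “discriminator” for a published deepfake detector and you have the dynamic we’ll return to in Section IV: every improvement in detection is, mechanically, a training signal for better generation.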

Think of the Tom Cruise TikToks that went viral in 2021, Channel 4’s deepfake of Queen Elizabeth’s Christmas message in 2020, or David Beckham’s anti-malaria PSA appearing to speak nine languages fluently.

We spent some time in the last class talking about the last one. Does that show not all deepfakes are harmful? Some are educational, artistic, or enable accessibility. But as one of your live counterparts pointed out, if AI dubs movies and PSAs, are the voice actors who dub them into other languages now out of a job? And how do we balance First Amendment free-speech concerns with regulation? (We’ll tackle some of these questions on Wednesday.)

The technology doesn’t have to be perfect to cause harm. Even mediocre fakes can fool people. Even debunked fakes leave lasting damage.

Think about Sora 2’s “cameos” feature. You upload yourself once. Now anyone you’ve given permission to is able to insert you into any video. Your friend creates a funny video of you skydiving for harmless fun. But what if they generate something else? What if their account gets hacked? What if they turn malicious?


III. The Reality Gap (What We Fear vs. What’s Happening)

Mark Scott’s reporting in POLITICO documented what global AI companies and election monitors actually found in 2024:

  • Nick Clegg from Meta said trends have shown “not anything wildly out of order” so far in 2024, while cautioning the situation could change.

  • Josh Lawson from the Aspen Institute noted that despite extensive convenings (including events with Hillary Clinton and journalist Maria Ressa), there’s been “no demonstrated case” showing AI disinformation directly changing nationwide voting behavior.

  • Elections in Pakistan and Indonesia showed “little if any evidence” that AI skewed outcomes.

But before you relax, let’s take a look at where the harm today is actually concentrated.

A. Non-Consensual Intimate Images

Remember that 90-95% of deepfake videos since 2018 are non-consensual pornography of women, according to Sensity AI. Researchers describe Telegram ecosystems facilitating generation and sharing, with concerns about minors being targeted.

Cara Hunter was running for office in Northern Ireland in 2022. Weeks before the election, she received a forty-second explicit deepfake video of herself. She faced harassment. She still won, but by a narrow margin. She describes “lifelong reputational harm” in interviews and in her TED talk. Even though she won, even though it was obviously fake, that content is now permanently out there.

Will Sora 2’s social app and cameo features make this harm vector even easier?

B. The Scariest Scenario (for me as a mother)

Remember our discussion of virtual kidnapping, where an attacker generates synthetic “proof of life” video of your family member in distress when there’s been no actual abduction. With Sora 2, attackers can create video realistic enough that families can no longer trust their own eyes about whether a loved one is safe.

We’ve created a technology that means you can’t trust evidence that your child is okay.

C. The Quality Paradox

Mark Scott’s POLITICO reporting documented that a Russian-backed deepfake about Ukrainian President Zelenskyy had linguistic errors and terrible lip-sync, so it was quickly debunked. The same has been true of obviously fake images of Trump.

Researcher Felix Simon from Oxford argues that audience skepticism plus content saturation actually limits the spread of obvious fakes, while even near-perfect fakes get scrutinized and debunked.

But the DHS report says that the threat driver is human credulity, and that the content itself doesn’t need to be advanced to be effective.

The truth is, we don’t actually know how this technology will impact information ecosystems at scale. We have theories and case studies. But the systemic impact is uncertain. And Sora 2 just made the experiment much larger.

IV. Why Detection Keeps Failing (And Why Wednesday’s Governance Conversation Is So Hard)

You understand what deepfakes are, how they’re made, and where the real harms are occurring. Now we need to talk about why we can’t reliably detect them. Because every governance mechanism we’ll examine Wednesday assumes we can identify deepfakes. That assumption might be wrong.

A. Can You Spot a Deepfake? (You Probably Can’t)

The DHS report includes detection cues. For images and video: localized blurring, skin-tone seams, doubled chins or edges, occlusion blur, inconsistent quality, boxy artifacts near the mouth or eyes, unnatural blinking, background inconsistencies, and context mismatches.

For audio: choppy sentences, inconsistent prosody, odd phrasing. For text: poor flow, incongruent phrasing, context mismatches.

In the live class, we pulled up the website Detect Fakes so that we could try out together whether we could detect a deepfake or not.

Take a minute and try it out yourself. Seriously! Pause here, and try it out to really understand what we’re all grappling with. Do at least four images.

Even with a checklist, even with time to examine closely, what was your success rate? In the live class, we were 2 for 4. That’s no better than chance. And our certainty ranged from “very high” to “perfectly certain.” Did you do better? It was disconcerting for us to see that even with developing expertise, we were easily fooled.

What does that tell you about the world we are now facing?

B. The Arms Race Problem

Remember GANs? Generator versus discriminator training against each other. That’s the core mechanism.

Now apply that to detection at scale. Researchers develop better detection tools. They publish them. Deepfake creators study the methods. They modify generation techniques to evade detection. New deepfakes are harder to detect. Researchers develop new methods. The cycle continues.

You’re in an adversarial arms race where your opponent gets better every time you publish your defense strategy.

The DHS report emphasizes that detection-only approaches are inadequate and increasingly reactive as deepfakes spread. Which means that mitigation must be multi-pronged—combining technology, education, and regulation with models that require constant retraining.

Every law we examine Wednesday will need to grapple with these challenges:

1. Technical Detection Is Inadequate

Even when detection works, it’s slow (fact-checking takes hours; virality takes minutes). It’s resource-intensive (doesn’t scale to billions of daily uploads). It’s probabilistic (“87% likely to be synthetic”—is that enough to criminalize?). It’s adversarial (every improvement leads to generation improvements).
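To see why “87% likely to be synthetic” is so uncomfortable at platform scale, here’s a back-of-the-envelope sketch. Every number in it is an illustrative assumption, not a measured rate:

```python
# Back-of-the-envelope sketch (all numbers are illustrative assumptions):
# even an excellent detector misfires constantly at platform scale.
daily_uploads = 1_000_000_000     # order of magnitude for a large platform
synthetic_rate = 0.01             # assume 1% of uploads are AI-generated
true_positive_rate = 0.95         # detector catches 95% of synthetic content
false_positive_rate = 0.01        # and wrongly flags 1% of authentic content

synthetic = daily_uploads * synthetic_rate
authentic = daily_uploads - synthetic

missed = synthetic * (1 - true_positive_rate)
wrongly_flagged = authentic * false_positive_rate

print(f"missed deepfakes per day:   {missed:,.0f}")          # 500,000
print(f"authentic posts flagged:    {wrongly_flagged:,.0f}")  # 9,900,000
# A flag is a probability, not proof, and at this scale the errors are enormous.
```

Under those made-up but not crazy assumptions, an impressively accurate detector still misses half a million deepfakes a day and wrongly flags nearly ten million authentic posts. Whatever threshold you pick, someone is harmed at scale.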

On Wednesday, we’ll look at the industry pledges from the Munich Security Conference, where over twenty-seven tech companies committed to automated labeling of AI-generated imagery and video to help detect election fraud. But if you can’t reliably detect which content is AI-generated, how do labeling systems work in practice?

2. Attribution Is Often Difficult

When you encounter a deepfake, who made it? Trace to an IP address—they used a VPN. Trace to a person—they claim their account was hacked. Trace to a tool—the tool maker claims no responsibility for misuse.

The DHS report notes that attribution is often difficult and slows remedies. With Sora 2’s social app, there’s more attribution than before (accounts, social graphs), but that doesn’t solve the fundamental problem. If you can’t reliably identify who created what with which tool, assigning liability becomes extremely challenging.

3. Scale Overwhelms Verification

Billions of people have smartphones. Every one of those can generate deepfakes using Sora 2 or similar tools. Content created can be shared instantly versus maybe thousands of professional fact-checkers globally. Each fact-check takes hours to days.

The ratio is millions to one, where content is created faster than it can be verified.

A politician is caught on video taking a bribe. The video is real, authenticated by experts, and the metadata checks out. But the politician says: “That’s obviously a deepfake. Everyone knows this technology exists. You can’t trust videos anymore.” And a significant portion of the public doesn’t know what to believe.

The politician escapes accountability not because people believe a fake, but because they believe nothing.

This is the “liar’s dividend,” as described by legal scholars Chesney and Citron.

[Figure illustrating the liar’s dividend.]

Source: https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend

The DHS report describes an even more sophisticated variant: an actor could recreate a real historical event with intentionally detectable “fake” signatures to cast doubt on the authentic record.

So is this the real threat? Not that we’ll believe false things, but that we’ll stop being able to identify true things? Where truth becomes impossible to establish?

4. The Mitigation Lifecycle (Your Scaffold for Wednesday)

Here’s a framework from the DHS report that I want you to review, as we’ll use it extensively on Wednesday. The report maps deepfake creation and spread through stages, with different intervention opportunities at each. On Wednesday, we’ll look at how laws target different stages of this lifecycle and consider where the best points of intervention may be for law and policy:

[Figure: the deepfake mitigation lifecycle, from the DHS report.]

Source: DHS Report, Increasing Threat of Deepfake Identities

  1. Intent: Policy/law can create deterrence through criminal and civil penalties

  2. Research: Organizational readiness and monitoring

  3. Creating the model: Developer responsibilities for releasing model signatures

  4. Dissemination: Platform partnerships and detection tools

  5. Viewer response: Education on verification

  6. Victim response: Reporting channels and support resources

On Wednesday, we’ll test each governance mechanism against this lifecycle to see where it plausibly works, and where it fails.

Your Homework

1. Share This Class: Share this lecture with someone who needs to understand what deepfakes actually are. Most people just saw headlines about Sora 2. They don’t know about the 90-95% non-consensual porn statistic. They don’t understand the liar’s dividend.


2. The Sora 2 Reality Check: Go look at OpenAI’s Sora 2 announcement. Watch the example videos. Read about the cameo feature. Then ask yourself:

Would you use this tool? For what? Would you upload your likeness? Who would you trust with access to your cameo? What could go wrong? Discuss this with your friends or family over dinner tonight.

Class dismissed.

The entire class lecture is above, but for those of you who found today’s lecture valuable, and want to buy me a cup of coffee (THANK YOU!), or who want to go deeper in the class, the class readings, video assignments, and virtual chat-based office-hours details are below.
