The Law That Kept Getting Blocked, and the Strategies That Might Actually Work (Inside My Advanced Topics in AI Law and Policy Class #9.2)
Children and Online Safety, Part 2 of 3
8:30 a.m., Wednesday. Welcome back to Advanced Topics in AI Law and Policy, where you have a seat alongside my Duke Law students in the live class. This week, we are diving into laws targeting online safety for children and the logic and limits of different regulatory approaches.
On Monday we mapped the terrain of laws to protect children online, with three different regulatory strategies, each resting on a different theory of harm and each facing different obstacles. We left with a question about Australia’s ban: if even a complete under-16 prohibition can be circumvented with a borrowed face and a cooperative parent, what does any of this actually accomplish? We also left with a genuine tension between parents who want authority over their children’s digital lives and children who sometimes need privacy from those very parents.
Then on Tuesday, a New Mexico jury found that Meta violated parts of the state’s Unfair Practices Act, crediting accusations that the company hid what it knew about the dangers of child sexual exploitation on its platforms and about harms to children’s mental health. The jury agreed with allegations that Meta made false or misleading statements, and it also agreed that Meta engaged in “unconscionable” trade practices that unfairly took advantage of the vulnerabilities and inexperience of children.
And today, in the bellwether case in LA, a jury found YouTube and Meta negligent in the design of their products, found that their negligence was a substantial factor in causing harm to the plaintiff, and found that they failed to warn of those harms. Suffice it to say … these verdicts are a BIG deal in the “architecture” vs. “content” wars against social media companies.
So in class today we’re going to go deeper into the laws (rather than the lawsuits) that are targeting platform reform, take seriously the arguments both for and against platform-level mandates, and set up Friday’s constitutional analysis by getting specific about what the courts have and haven’t resolved.
One quick thread and reminder from Monday’s class before we go further. The two competing harm theories, the content-based theory versus the architectural theory, aren’t just an academic framing question. They determine what counts as a “less restrictive alternative” under constitutional analysis. That phrase matters because when courts review laws that restrict speech, they ask whether the government could have achieved the same goal with a less burdensome approach. If the harm is content-based, telling parents to install filters is a plausible less-restrictive answer. If the harm is architectural, it isn’t. Keep that distinction in mind as we work through the legal strategies today. (And in case you’re wondering, yes I think the companies will appeal in both the NM and LA cases, and that the First Amendment and Section 230 claims will be front and center in those appeals).
I. The Design Code Approach: The UK Model
The UK Information Commissioner’s Children’s Code, in effect since September 2021, applies 15 design standards to any service “likely to be accessed by children,” including high-privacy settings by default, no nudge techniques, no behavioral profiling (using data about how someone behaves online to build a predictive model of their preferences and vulnerabilities) without demonstrable need, geolocation off by default, and mandatory data minimization. The code does not require age verification. It requires platforms to design as if children might be present, which means everyone gets the child-appropriate environment unless the platform can demonstrate a compelling reason otherwise.
Proponents point to a significant advantage of this approach, that settings-based mandates don’t require platforms to make judgments about what content harms children. “Configure default privacy settings to the highest level” is a concrete, administrable requirement. “Disable nudge techniques” is specific enough to comply with and enforce.
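To make that contrast concrete, here is a minimal, purely illustrative sketch in Python of what a “highest privacy by default” rule might look like inside a platform’s codebase. The field names and the function are my own hypothetical shorthand, not language from the Children’s Code:

```python
# Illustrative sketch only; field names are hypothetical, not the Children's Code's text.
from dataclasses import dataclass

@dataclass
class AccountDefaults:
    profile_visibility: str      # who can see the account and its activity
    behavioral_profiling: bool   # build a predictive model from behavior data
    geolocation_enabled: bool    # collect precise location
    nudge_prompts_enabled: bool  # streaks, re-engagement prompts, and the like

def defaults_for_new_account() -> AccountDefaults:
    """Settings-style mandate: the rule is about defaults, not about content.

    Under the UK approach, these child-appropriate defaults apply to every new
    account unless the platform can demonstrate a compelling reason to relax them.
    """
    return AccountDefaults(
        profile_visibility="private",
        behavioral_profiling=False,
        geolocation_enabled=False,
        nudge_prompts_enabled=False,
    )
```

The point of the sketch is auditability: a regulator can check compliance by inspecting defaults and settings flows without ever deciding which posts are harmful.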
The UK code’s enforcement record is, however, mixed. The Information Commissioner’s Office (the UK’s independent data protection regulator, a government body with authority to investigate companies and impose fines) has issued guidance and begun assessments, but platforms have primarily responded through policy updates rather than fundamental architectural changes. Critics argue the code has produced substantial paperwork compliance without transforming the designs that cause harm. Others argue it is too early to assess long-term behavioral change and that the code’s real influence has been on how new products are designed rather than on retrofitting existing ones.
The code’s institutional advantage is a regulator with ongoing authority to update standards as technology changes. The United States has no direct equivalent. The Federal Trade Commission, or FTC, is the closest analog: a federal agency with power to investigate companies and bring enforcement actions for deceptive or unfair practices. But the FTC’s authority is primarily litigation-based rather than built around ongoing regulatory standards, which means US child safety legislation relies more heavily on specific statutory language and court battles. By the time a case is decided, the technology has often moved on.
II. What California Did Differently, and Where It Created Problems
California’s Age-Appropriate Design Code Act, often abbreviated as the CAADCA, attempted to go beyond settings mandates. It required covered platforms to create Data Protection Impact Assessment reports, or DPIAs, identifying risks to children across eight enumerated factors, including whether the platform’s design could expose children to “harmful or potentially harmful content,” whether algorithms could harm children, and whether features were engineered to increase compulsive use. Platforms were then required to mitigate any identified risks before offering the service to children.
The mitigation requirement introduces a different kind of obligation than the settings mandate. To identify what content is “potentially harmful to children” and take steps to prevent children from seeing it, a platform has to make a content judgment. It has to determine, under penalty of $7,500 per intentional violation per child, which speech children should or should not encounter. The courts, specifically the courts and not the advocates or the legislators, determined that this is where the First Amendment line falls, as we’ll see in detail on Friday.
One student in the live class raised a question that goes to the heart of the drafting challenge. Would laws mandating more specific content prohibitions, for example prohibiting platforms from showing children pornographic materials or content depicting graphic violence, survive constitutional challenge more easily than laws giving companies open-ended discretion about what to restrict? This is a genuinely important question, and the answer turns on whether specificity removes the platform’s discretionary editorial role. We’ll see what the court said about this.
III. COPPA 2.0: The Theory Behind a Disclaimer
COPPA 2.0 extends the original COPPA’s protections to teens aged 13 through 16, prohibits targeted advertising to children and teens, replaces “actual knowledge” with “knowledge fairly implied on the basis of objective circumstances” (a change that matters because platforms have historically claimed they didn’t know certain users were minors even when the evidence strongly suggested otherwise), and adds a right for teens to delete their own data. As we discussed on Monday, it passed the Senate unanimously.
Section 1306(f)(2)(B) states that the law cannot be construed to require an operator to “implement an age gating or age verification functionality.” That is a deliberate choice reflecting a particular theory that if platforms can’t collect, use, or share personal information of children and teenagers for targeted advertising and behavioral profiling, the precision of algorithmic recommendations diminishes regardless of the user’s age. One student in the live class independently articulated this logic: “if we can’t ban dark patterns, we can ban the data collection that makes dark patterns effective.” That is roughly the COPPA 2.0 theory.
Whether the theory holds depends on contested empirical claims. Proponents argue that removing behavioral data from the recommendation equation reduces the platform’s ability to identify and extend engagement with vulnerable users. Critics, including the Information Technology and Innovation Foundation, argue that restricting personalized advertising reduces the revenue that supports free services, potentially pushing children toward paid alternatives that are less accessible to lower-income families, and that this approach targets advertising delivery rather than the content and design features that cause the most documented harm.
The strategy also leaves architectural features like infinite scroll and autoplay untouched. Whether that matters depends on which harm theory you found more persuasive on Monday.
IV. The Case Against New Mandates
Before we get to Friday’s constitutional analysis, it’s worth engaging seriously with the argument that the problem is real but the proposed regulatory solutions create their own harms, and that existing legal mechanisms, properly enforced, could accomplish most of what legislators are trying to do through new mandates (see e.g. the verdict yesterday in New Mexico).
This argument is advanced by serious policy organizations and constitutional scholars, and it deserves engagement on its own terms. It has three parts.
1. The fraud theory. Platforms have represented to users, parents, and regulators that they care about user safety and that their practices conform to their stated policies. When those representations are false, when internal research documents harm but is not disclosed or when community standards claim protections that aren’t operationalized, that is a deceptive trade practice under FTC Section 5, the provision of the Federal Trade Commission Act that prohibits companies from engaging in unfair or deceptive acts in commerce. No new speech restrictions are required, and no government agency needs to decide what content is harmful. No age verification infrastructure is required, and the platform is held to its own stated commitments. The FTC’s 2023 complaint against Meta pursued exactly this theory. The jury verdict yesterday in New Mexico v. Meta follows a similar theory, crediting the allegation that Meta, “contrary to Meta’s public commitments, expose[s] [children] to dangerous content related to eating disorders and self harm.”
2. The product liability theory. Design choices like infinite scroll, autoplay, notification engineering, and streak mechanics are product decisions that may foreseeably harm certain users. Courts considering the ongoing social media MDL litigation have found that some design claims can proceed. MDL stands for Multi-District Litigation, a procedural device that consolidates thousands of similar lawsuits from across the country into a single court so they can be handled efficiently. This particular MDL involves families of children who claim they were harmed by social media design choices, and it is one of the largest litigation efforts in this space. Courts have found that some design claims survive Section 230, the federal law that generally shields platforms from liability for content their users post. The key distinction is that you can’t sue a platform for what someone said on it, but you may be able to sue it for how the platform was built. Proponents argue a court can hold a platform liable for harmful design without deciding anything about content. Critics point to the difficulty of establishing causation. Proving that a specific design feature caused specific harm to a specific child is slow, expensive, and uncertain, and the tobacco litigation analogy suggests timelines measured in decades. Today’s verdict in the landmark LA bellwether case was built on exactly this theory.
3. Transparency plus private ordering. Require platforms to disclose publicly and in machine-readable form how their algorithmic systems work, what data they collect, and what design choices optimize for engagement. Parents, app stores, third-party tools, and researchers then act on that information. This is the algorithmic equivalent of nutrition labeling where the government ensures honest disclosure; markets and individuals then make decisions based on transparent information. This approach has a favorable constitutional profile because courts generally allow the government to require factual commercial disclosures without treating that as the kind of speech restriction that requires demanding justification.
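To see what the “nutrition label” analogy could mean in practice, here is a minimal, hypothetical sketch of a machine-readable disclosure. The schema and field names are mine, for illustration only; no statute currently specifies them:

```python
# Hypothetical disclosure schema, for illustration only; not drawn from any statute.
import json

disclosure = {
    "service": "ExamplePlatform",
    "last_updated": "2026-01-01",
    "data_collected": ["watch_time", "likes", "search_history", "approximate_location"],
    "optimization_targets": ["session_length", "daily_return_rate"],
    "design_features": {
        "infinite_scroll": True,
        "autoplay_default": True,
        "push_notifications_default": True,
    },
}

# Published in machine-readable form so parents, app stores, third-party tools,
# and researchers can compare platforms or build filtering tools on top of it.
print(json.dumps(disclosure, indent=2))
```

Note how this dovetails with the fraud theory above: if a published disclosure turns out to be false, that itself becomes a deceptive trade practice.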
Proponents argue this approach respects individual autonomy and avoids government overreach into content. Critics argue that transparency works well for motivated, informed actors who have the resources and standing to act on what they learn, and that for families who lack any of those conditions, disclosure without enforcement changes little. The disagreement is ultimately empirical: how many families need active regulatory protection versus how many can effectively self-protect with good information?
V. Where This Leaves Us Before Friday
Each approach carries real costs and real benefits, and the honest answer is that none of them individually solves the problem.
Access restriction can prevent some harm for children whose parents are willing and able to exercise the consent mechanism appropriately. It also creates data infrastructure with its own risks, and its protective effect may be limited for children most in need of privacy from their parents.
Design code mandates can make platforms safer by default without requiring age verification or parental gatekeeping. Whether settings-based mandates produce real behavioral change in engagement patterns, rather than surface compliance, remains contested.
Data minimization can reduce the precision of targeted manipulation if behavioral data is genuinely reduced. It may also reduce the quality or accessibility of free services and leaves engagement architecture untouched.
Fraud enforcement and product liability are constitutionally favorable but slow. Transparency requirements respect autonomy but depend on informed and motivated actors to translate disclosure into changed behavior.
On Friday, we look at what the courts have actually resolved and what they’ve left open. The specific line the Ninth Circuit drew, in an opinion released eleven days ago, may be the most practically important development in this area of law in years.
Before then, head over to Jacob Ward at the Rip Current to catch the latest on the New Mexico case and verdict, and on the LA trial. Plus, catch our podcast together, going geeky on these issues, dropping later today.
And if you’re in Charlotte, North Carolina tomorrow, join me and Nicholas Thompson at Queen’s College discussing AI and the Future of Everything.
Share this with anyone who has a teenager in their life or a stake in how this turns out.
Class Dismissed. See you Friday.
The entire class lecture is above, but if you’d like to support my work or go deeper in your learning, please upgrade to being a “paid subscriber.”
Paid subscribers also get access to class readings packs, discussion questions, bonus content (like where I am speaking each week and photos from events), full archives, virtual chat-based office hours, additional readings, as well as one live Zoom-based class session per semester.



