The Statute That Couldn’t Stretch (Inside my Advanced Topics in AI Law and Policy Class #7.2)
Persuasion and Manipulation (Class 2 of 3)
8:30 a.m., Wednesday. Welcome back to Professor Farahany’s Advanced Topics in AI Law and Policy class! We’re in Week 7. Make sure you’ve taken Monday’s class, The Persuasion Exchange, because today builds on that material. If you’ve already caught up, keep going!
Try to cancel a free trial sometime. Not the kind that lets you click “cancel” and move on. The kind where you signed up in two taps but now have to navigate four screens, decline three special offers, and call a phone number that’s only staffed during business hours. You know the feeling. You signed up for something easy and now you’re trapped in a maze designed to make you give up and keep paying.
Last week, a legal advisory warned businesses that the FTC and state regulators are now treating these kinds of design tricks as violations of consumer protection law. The maze you just imagined? Regulators have a name for it. They call it a “dark pattern,” and they’re starting to crack down.
But what makes this week’s assigned reading so jarring in comparison is that in 2022, a federal court looked at one of the most recognized dark patterns in digital commerce, the loot box, and systematically closed every legal avenue for challenging it.
On Monday, we watched that court shut the door in Coffee v. Google. The plaintiffs couldn’t show that virtual items had real-world “value.” Google was shielded by platform immunity. The court couldn’t find that anyone had been harmed in a way the law recognizes. And the court’s parting message was reasonable, even humble: if loot boxes are harmful, the legislature should act. Don’t bring this to us. Bring it to Congress.
That sounds reasonable. But there’s a catch. What if the laws Congress already wrote can’t reach the problem either? What if the entire legal infrastructure was designed for technologies that no longer exist?
Today, we examine what happens when law tries to stretch, and when it tries to start fresh. We’ll look at a Supreme Court case about text messages, the government’s most flexible consumer protection tool (which it hasn’t fully used), and a bill that would have solved the legal problems Coffee identified. The bill was introduced in 2019. It never became law.
Welcome to the regulatory gap.
The Text Message Case
The Telephone Consumer Protection Act was written in 1991 to stop a specific nuisance. Companies had developed machines that dialed random blocks of telephone numbers automatically, tying up emergency lines and harassing consumers at scale. Congress banned “automatic telephone dialing systems” and defined them with reference to machines that use random or sequential number generators.
Fast forward to the 2010s. Noah Duguid receives unwanted login-alert texts from Facebook. He never created a Facebook account, but someone linked his phone number to one. He can’t stop the messages. He sues under the TCPA.
Here’s the problem. Facebook’s notification system stores phone numbers in a database and texts them automatically. But it doesn’t dial random numbers. It sends targeted messages to specific accounts. Is it an “automatic telephone dialing system” under the statute?
The Supreme Court said no, unanimously. In Facebook, Inc. v. Duguid, 141 S. Ct. 1163 (2021), Justice Sotomayor wrote for a 9-0 Court (Justice Alito concurring in the judgment) that the statute’s definition is tied to random or sequential number generation, and Facebook’s system uses neither. The text of the law doesn’t cover what Facebook does, no matter how annoying those login alerts are.
What makes this case important for us is not the grammar. It’s the consequence. Sotomayor acknowledged that the narrow reading might leave a gap in consumer protection, but wrote that the Court could not “rewrite the TCPA to update it for modern technology.” That is Congress’s job.
Now notice the parallel. In Coffee, gambling statutes written for slot machines couldn’t reach loot boxes because virtual items aren’t “things of value.” In Duguid, a communication statute written for random dialers can’t reach targeted notification systems because they don’t use random number generators. In both cases, a law drafted for an older technology is asked to cover a newer one. In both cases, the court says the fit isn’t close enough. In both cases, the court tells the plaintiffs to take it up with Congress.
We have seen this pattern throughout the semester. In Week 3, existing consumer protection law couldn’t reach dark patterns in subscription cancellation flows, which is why the DETOUR Act was proposed. In Week 4, Section 230 shielded social media platforms from liability for algorithmic amplification. In Week 5, no federal law governed companion chatbot design even as students in this class reported feeling genuine emotional attachment after a few days of use.
Technology moves faster than statutory language. The question is always who closes the gap and how.
Justice Alito’s concurrence is worth flagging even though it’s short. He joined the result but worried about the methodology. The majority relied on a grammar rule (a modifier at the end of a list applies to every item in the list) and applied it rigidly. Alito cautioned that grammar rules are useful guides, not mechanical formulas, and that meaning ultimately depends on how ordinary people use language. He sees the risk of building regulatory outcomes on parsing exercises. If courts won’t stretch old statutes to cover new technologies, and Congress moves slowly (or not at all), the gap grows with each technological generation.
The Tool Nobody Uses
So the gambling statutes can’t reach loot boxes. The TCPA can’t reach modern notification systems. Both courts punt to Congress. But there is one legal tool that was specifically designed to be flexible enough to reach unfair and deceptive practices regardless of the specific technology involved.
The Federal Trade Commission is the federal agency charged with protecting consumers from unfair business practices. Think of it as the government’s consumer protection enforcer. And its primary weapon, Section 5 of the FTC Act, prohibits “unfair or deceptive acts or practices in or affecting commerce.” Unlike the gambling statutes or the TCPA, this language is deliberately broad. It was written to fill gaps. It doesn’t require that the challenged practice be gambling. It doesn’t require a “thing of value.” It doesn’t require a random number generator. It just requires unfairness or deception.
The FTC can go after a business practice in two ways under Section 5. It can call it unfair, or it can call it deceptive. Each has its own test, and both are worth understanding, because this is where the real regulatory potential for addressing persuasive design lives.
The “unfair” path is the more promising one for loot boxes. The FTC asks three questions.

First, does the practice cause real harm? For loot boxes, think aggregate overspending by minors, cumulative financial losses, exploitation of psychological vulnerabilities. The answer is probably yes.

Second, could consumers reasonably avoid the harm? This is the question that should make your ears perk up. Can you “reasonably avoid” spending too much on loot boxes when the odds are hidden, when virtual currency makes it hard to track what you’re actually spending, and when the reward system is engineered to trigger the same brain pathways as a slot machine? Think about the student who reached for her water bottle involuntarily when her partner sent a reminder. If a text about hydration can bypass your conscious decision-making, what does a system engineered to exploit those pathways do to your ability to just say no? The behavioral evidence from the CMA paper (assigned reading, discussed Friday) suggests avoidance is often illusory. But the FTC has historically been reluctant to conclude that consumers can’t protect themselves. That conclusion sounds paternalistic, and the agency knows it.

Third, is the harm outweighed by benefits? Here the industry pushes back. Free-to-play games funded by microtransactions entertain millions. Loot boxes fund game development. This tradeoff is genuinely contested.
The “deceptive” path is narrower but still viable. The strongest theory is failure to disclose odds. If a loot box has a 0.1% chance of producing a rare item and that probability is never revealed, most people will dramatically overestimate their chances of winning. That’s a material omission, meaning it would change your purchasing decision if you knew the truth. China has required odds disclosure since 2017. (Not assigned, for reference.)
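To see why an undisclosed 0.1% probability is material, it helps to run the numbers. Here is a quick sketch (the purchase counts are hypothetical; only the 0.1% figure comes from the example above):

```python
# How likely is at least one rare drop at a hypothetical 0.1% rate?
# P(at least one in n boxes) = 1 - (1 - p)^n
p = 0.001  # 0.1% chance per box

for n in (10, 100, 500):
    prob = 1 - (1 - p) ** n
    print(f"{n:>4} boxes: {prob:.1%} chance of at least one rare item")

# Expected boxes until the first rare item (geometric distribution): 1/p
print(f"Expected boxes until first rare: {1 / p:.0f}")
```

Even 500 purchases leave a better-than-even chance of never seeing the item, and on average a buyer needs 1,000 boxes to hit one rare. That gap between intuition and arithmetic is exactly what odds disclosure is meant to close.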
So Section 5 could reach loot boxes. Both paths lead somewhere. Why hasn’t the FTC walked down either one?
First, only the FTC can bring a Section 5 case. You or I can’t sue a company for “unfair practices” under this law. The Coffee plaintiffs couldn’t use this tool because it wasn’t available to them.
Second, the FTC has gone after dark patterns under Section 5. Remember the $245 million Epic Games/Fortnite settlement we discussed in Week 3. But that case targeted the design of the purchase interface (one-click buying without confirmation), not the loot box mechanic itself. The FTC hasn’t pursued a standalone theory that loot box randomization is unfair.
Third, and perhaps most importantly, FTC enforcement priorities shift with presidential administrations. What gets prosecuted is as much a political question as a legal one.
One of your live counterparts observed something in her discussion post that I think captures the frustration perfectly. She wrote that the Coffee court “acknowledged the concerns around loot boxes were real and serious, but still threw the case out because loot boxes technically did not meet the definition of an illegal slot machine under California law.” Then she noted: “The court said that if this is a real problem it should be first properly established in the legislation.”
The court says the legislature should act. The legislature introduced a bill. Let’s see what happened.
The Bill That Might Have Fixed Everything
S. 1629, the Protecting Children from Abusive Games Act (116th Congress), looks at the loot box problem and asks a completely different question from the one Coffee asked. The bill never asks whether loot boxes are gambling. Never. It doesn’t mention the California Penal Code. It doesn’t try to establish that virtual items have “value.” Instead, it treats loot boxes and pay-to-win microtransactions as exploitative monetization of minors. The regulatory frame shifts from “are these gambling?” to “are these predatory?”
That shift matters. It’s the most important structural lesson in this week’s readings.
Two tiers of prohibition. For minor-oriented games, a categorical ban on publishing or distributing games with pay-to-win microtransactions or loot boxes. No balancing test. No case-by-case analysis. For games with mixed audiences, the prohibition extends to any game where the publisher has “constructive knowledge” that users under 18 exist. Analytics showing minor users. Marketing demographics skewing young. If you know kids are playing, you can’t include these mechanics.
The definitions section is where the bill does its real work. I want you to compare each definition to what Coffee required.
S. 1629 defines a “loot box” as an add-on transaction that, in randomized or partially randomized fashion, unlocks a feature or enhances entertainment value. No requirement that the reward be transferable. No requirement that it have monetary value. No “thing of value” inquiry at all. This definition would have overridden Coffee entirely.
S. 1629 defines an “add-on transaction” to explicitly include payment in virtual currency purchasable with money. This is the surgical fix for Coffee’s transactional separation problem. The court split the purchase into two steps (buy currency, then spend currency) and treated them as independent transactions. The bill treats them as a single monetization pipeline. Because that’s what they are.
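The pipeline point is just arithmetic. With hypothetical storefront prices (all numbers below are invented for illustration), the cash cost of a box passes straight through the currency step:

```python
# Hypothetical storefront: the two-step purchase Coffee treated as separate
GEM_PACK_PRICE = 19.99   # dollars per gem pack (hypothetical)
GEMS_PER_PACK = 2000     # gems per pack (hypothetical)
BOX_COST_GEMS = 150      # gems per loot box (hypothetical)

# Step 1 (buy currency) and step 2 (spend currency) collapse into one number
dollars_per_gem = GEM_PACK_PRICE / GEMS_PER_PACK
cash_cost_per_box = BOX_COST_GEMS * dollars_per_gem
print(f"Effective cash cost per loot box: ${cash_cost_per_box:.2f}")
```

The dollar cost flows through the virtual currency unchanged, which is why the bill treats the two steps as a single monetization pipeline rather than independent transactions.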
S. 1629 defines “pay-to-win microtransaction” as an add-on transaction that eases progression, assists achievement, permits continued access after a timer (for progression games), or provides competitive advantage (for competitive games). Cosmetic items are excluded.
Notice what the bill does not require. It doesn’t require chance. It doesn’t require a wager. It doesn’t require a “thing of value.” It regulates monetization asymmetry, competitive distortion, and artificial progression barriers. This is a fundamentally different conceptual model from gambling law.
If S. 1629 had been law when Coffee was filed, the case comes out differently at every stage. Loot boxes are unlawful without the “thing of value” analysis. The virtual currency intermediary is irrelevant. The question isn’t whether the transaction was “fair” in a contractual sense but whether the mechanic is permissible at all.
The bill was introduced in 2019 by Senator Josh Hawley. It never became law. No successor has passed at the federal level. Several obstacles stood in its way. The Entertainment Software Association opposed it aggressively. Under Brown v. Entertainment Merchants Association (2011), the Supreme Court held that video games are protected speech under the First Amendment, so a categorical ban on a game mechanic would likely face strict scrutiny, a constitutional test very few restrictions survive. The “constructive knowledge” standard is broad: if a company’s own data shows that minors are using the product, the prohibition kicks in, which in practice would capture virtually every game. And Congress, as we have established across six weeks of this course, moves slowly on tech regulation when it moves at all.
But several of your live counterparts raised a harder question about S. 1629’s approach, even if it had passed. One student observed that loot boxes involve “paying real money for randomized rewards,” which makes them easy to analogize to gambling. But “other persuasive tools may push users to spend time or money, but they usually do not involve paying for a chance outcome.” Another noted: “things like streaks or constant notifications or social pressure mechanics do not involve direct spending, so nobody really goes after them legally, even though they can mess with your behaviour just as much.”
They’re both right. A bill that targets only loot boxes and pay-to-win mechanics is treating the symptom. The disease is broader. Streaks, notifications, social comparison, infinite scroll, variable reward schedules in non-monetary contexts, these all operate through the same psychological pathways. S. 1629 would have solved the loot box problem. It would not have solved the persuasive design problem.
What’s Happening Elsewhere
While the U.S. has stalled, the rest of the world has been experimenting, and the results are messy.
Belgium classified paid loot boxes as gambling in 2018, requiring their removal under threat of fines up to €800,000 and prison sentences of up to five years. Game publishers complied, pulling loot box mechanics from Belgian versions of FIFA, Overwatch, and Counter-Strike. But the Netherlands tried a similar approach and was reversed by a court in 2022, which held that loot box prizes had no independent economic value. The reasoning tracks Coffee almost exactly.
China has required odds disclosure since 2017 and established daily purchase limits. Japan banned a loot box variant called “kompu gacha” that required collecting multiple items to combine into rarer prizes. Brazil’s child-safety law bans loot box sales to minors effective this month.
At the EU level, the proposed Digital Fairness Act (Q4 2026) could ban loot boxes or require parental consent. The EU AI Act’s Article 5 prohibits AI systems that use “subliminal techniques” or “purposefully manipulative or deceptive techniques” that could cause significant harm. Unlike S. 1629, which targets specific game mechanics, Article 5 tries to regulate the manipulative technique itself regardless of the product.
The European Tech Alliance argues that at least 13 existing EU laws already cover dark patterns and that the priority should be enforcing existing rules. This is a legitimate counterargument. Is the problem a shortage of laws or a shortage of enforcement?
What jumps out from the comparative landscape is that the “thing of value” question, the very issue that sank the Coffee plaintiffs’ case, is genuinely contested internationally. Belgium says loot box prizes have value. The Netherlands says they don’t. The UK Gambling Commission says they don’t qualify as gambling but recommends the government act anyway. Australia’s Senate committee recommended regulating loot boxes as gambling.
The conceptual framework keeps breaking. Which suggests the framework might be the problem.
The Pattern
Every framework we’ve examined defines the problem through the lens of what it was built to solve. Gambling law asks about chance and prizes. Consumer protection law asks about deception and injury. Administrative enforcement asks about unfairness. Each captures part of the picture. None captures the whole.
On Friday, we’ll look at a framework from the UK that reframes the question entirely. And then we’ll confront what happens when the persuader isn’t a human-designed loot box system but an AI that can personalize its persuasion to your individual psychological profile, adapt in real time to your resistance, and operate continuously without fatigue.
If the law is already losing the fight against static persuasive design, what happens when the design starts learning?
Class Dismissed. See you Friday.
The entire class lecture is above, but if you’d like to support my work or go deeper in your learning, please upgrade to a paid subscription.
Paid subscribers also get access to class reading packs, discussion questions, bonus content, full archives, virtual chat-based office hours, additional readings, and one live Zoom-based class session per semester.



