Thinking Freely with Nita Farahany

Your Meditation App May Soon Have More Legal Protection Than Brain Surgery.

Nita Farahany
Aug 05, 2025

Note: This article examines the unintended consequences of neural privacy laws in California, Colorado, Montana, and Connecticut as of mid-2025. While these laws aim to protect our brain data, their implementation details create surprising barriers to beneficial technologies. The specific requirements and interpretations discussed here reflect my current understanding of these complex, evolving regulations.

Imagine that you come home, exhausted after work, and slip on a simple headband that helps you meditate by showing whether your brain is calm or stressed. Nothing invasive—just basic feedback, like a mood ring for your mind. Now imagine that this wellness gadget requires more legal paperwork than a surgeon literally opening your skull to implant a computer chip.

This is what’s happening right now in America, where well-meaning privacy laws have created an unfortunate reality. We’re suffocating helpful technology while leaving the door wide open for Big Tech to read our minds through our other devices.

As the author of The Battle for Your Brain, I’ve spent years studying how brain-reading technology promises medical miracles but threatens our mental privacy. When California, Colorado, and Montana passed laws to protect “neural data,” I initially celebrated. Finally, someone was taking our cognitive liberty seriously—our fundamental right to keep our thoughts private and our minds free from manipulation.

But after analyzing these laws in detail, I’ve discovered something disturbing. They may accidentally destroy the innovations that could help millions while completely missing the actual threats to our mental privacy.

How We Got Here

The story starts with legitimate fear. Companies like Neuralink are developing brain implants that can decode thoughts. Meta is building wristbands that read nerve signals to control computers. Consumer devices can already detect everything from focus to stress levels. Lawmakers rightfully worried about what happens when employers, insurers, or advertisers get access to our brain data.

So these states created special protections for “neural data”—any information from measuring your brain or nervous system. The intention was noble. Your thoughts deserve at least as much protection as your credit card number.

But here’s where things are going wrong. Thus far, lawmakers have defined neural data so broadly that it includes any measurement from any nerve in your body. Your entire nervous system—which controls everything from your heartbeat to your digestion to every tiny muscle movement—has suddenly fallen under the same legal category as mind-reading.

The Meditation App That Became a Legal Nightmare

Let me introduce you to Sarah, a working mother of two who discovered meditation through an app called Muse. The app comes with a headband that measures basic brainwaves—essentially detecting if she’s tense or relaxed. It helped her manage anxiety without medication.

Under the new laws, this simple device that tells Sarah “you’re calm” or “you’re stressed” now requires the kind of legal protection reserved for the most sensitive data. In Colorado, the company needs her explicit opt-in consent, must conduct expensive “data protection assessments,” and must maintain valid consent for as long as it processes her data. If Sarah revokes her consent, or if the company fails to maintain proper consent procedures (perhaps while updating its privacy policy), the device must legally stop working.
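
To see how blunt that requirement is in practice, here is a minimal sketch of consent-gated processing, written in Python purely for illustration. The ConsentStore class and the toy calm/stressed threshold are my own inventions, and Colorado’s actual obligations (opt-in consent, data protection assessments, ongoing recordkeeping) go far beyond a single boolean check like this one.

```python
# Hypothetical sketch only: ConsentStore and the toy calm/stressed
# threshold are my inventions, not any company's or statute's design.

class ConsentStore:
    def __init__(self) -> None:
        self._granted: set[str] = set()

    def grant(self, user_id: str) -> None:
        self._granted.add(user_id)

    def revoke(self, user_id: str) -> None:
        self._granted.discard(user_id)

    def has_consent(self, user_id: str) -> bool:
        return user_id in self._granted

def process_brainwaves(user_id: str, samples: list[float],
                       consent: ConsentStore) -> str:
    # If consent was never given, or has since been revoked, processing
    # must halt; in practice, the headband's feedback simply stops working.
    if not consent.has_consent(user_id):
        raise PermissionError("no valid opt-in consent on file")
    return "calm" if sum(samples) / len(samples) < 10.0 else "stressed"
```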

In Montana, whose law takes effect in October 2025, the company can’t share Sarah’s brainwave patterns with researchers in certain countries, even if those researchers are working on breakthrough anxiety treatments. The state treats neural data as highly sensitive, with stringent deidentification requirements. Properly anonymized data can legally be used for research (the law explicitly allows deidentified data that “cannot be reasonably linked to an identified or identifiable individual”), but the requirements are so complex and the liability risks so high that many companies may find it easier to avoid research altogether.

Meanwhile, the meditation app on Sarah’s phone that doesn’t use the headband—but tracks her mood through her typing patterns, analyzes her voice for stress, and monitors her sleep through movement—faces no special regulations at all. The law protects the headband but ignores the phone that knows far more about her mental state.

The Paralyzed Patient Blocked from Treatment

Consider James, a 35-year-old construction worker paralyzed in an accident. Now imagine there is a new brain implant, similar to Neuralink’s technology, that could let him control a computer with his thoughts—allowing him to text his daughter, manage his finances, even potentially control a wheelchair.

These devices improve through global collaboration. Researchers in Japan might discover a better way to decode movement intentions. Scientists in Germany might solve a safety issue. The FDA, our federal medical device regulator, actually requires this kind of data sharing for safety.

But Montana’s law prohibits storing neural data in certain countries. Colorado requires assessments that can take months. California has different rules entirely. A company trying to help James must navigate conflicting state and federal requirements, with lawyers unable to agree on which law takes precedence.

Many companies will look at this legal minefield and make a simple business decision. Focus on other technologies. James waits while lawyers argue about jurisdiction.

The Hidden AI Loophole That Makes Everything Worse

Here’s the most damning failure of these laws. They completely missed how mental privacy is actually violated in 2025. They regulate collecting neural data but say nothing about what artificial intelligence does with it afterward.

The laws define “neural data” very specifically. California’s statute protects “information that is generated by measuring the activity of a consumer’s central or peripheral nervous system.” That’s the raw EEG recording, the direct nerve signal measurement. But herein lies the loophole. When AI analyzes that recording and extracts insights, those insights aren’t necessarily covered by the same protections. After all, they’re not “generated by measuring” nervous system activity—they’re generated by AI analysis of already-collected data.

Picture a company that legally collects your brainwaves from a meditation app, with all the proper consents under these new laws. You agreed to share your “relaxation data.” But then its AI analyzes those patterns and discovers it can predict your risk of developing addiction, detect markers suggesting your sexual orientation, identify early signs of Parkinson’s disease, calculate the likelihood of relationship instability, assess your IQ and potential learning disabilities, or even infer your political leanings from stress responses to certain topics.

Under current law, the company must protect the raw brainwave data. But the AI-derived insight “User has 78% likelihood of addiction” isn’t itself “neural data”—it’s an inference, a prediction, a derivative work. The laws say nothing about protecting or restricting these AI insights.

The company could package up these AI discoveries about your most intimate traits and sell them to insurance companies for risk assessment, employers for hiring decisions, dating apps for compatibility algorithms, or political campaigns for micro-targeting. You consented to share brainwave data for meditation. You didn’t consent to have AI extract and monetize your deepest secrets—but the law doesn’t distinguish between the two.

This is like regulating security cameras but not what someone does with the footage. We’ve protected the raw data while leaving the actual privacy violations—what AI learns about your mind—completely unprotected.
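
To see how the loophole operates mechanically, here is a minimal sketch, written in Python purely for illustration. The Record class and the is_protected() rule are my own inventions, not language from any statute or real compliance system; the point is only that a definition keyed to how data was generated lets a derived inference slip through untouched.

```python
from dataclasses import dataclass

# Purely illustrative: Record and is_protected() are my own inventions.

@dataclass
class Record:
    source: str    # how the data came into existence
    payload: dict  # the data itself

def is_protected(record: Record) -> bool:
    # Protection attaches only to information "generated by measuring"
    # nervous system activity, mirroring the definitional language above.
    return record.source == "nervous_system_measurement"

# The raw EEG recording is squarely inside the definition.
raw_eeg = Record(
    source="nervous_system_measurement",
    payload={"channel_1_microvolts": [12.4, 11.9, 13.2]},
)

# An AI model consumes that recording and emits an inference. Its source
# is the model's analysis, not a measurement of the nervous system.
inference = Record(
    source="model_inference",
    payload={"addiction_risk": 0.78},  # derived from raw_eeg
)

print(is_protected(raw_eeg))    # True  -> covered as neural data
print(is_protected(inference))  # False -> slips outside the definition
```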

The Classroom Technology That Could Help Your Child—But Won’t

Now imagine a remarkable headband that can detect dyslexia and ADHD by tracking how a child’s brain processes attention while reading. Instead of waiting years for a traditional diagnosis, teachers could identify struggling students in minutes and get them help immediately.

Parents absolutely must control their children’s neural data—no question. Strong parental consent is non-negotiable when it comes to children’s brain information. But the current laws make that protection impractical through bureaucratic complexity.

In Montana, schools need express parental consent for collecting neural data, using it for assessment, and sharing it with school professionals. While these consents could be bundled into one comprehensive form, the law’s requirement of “separate express consent” for each distinct use creates enough uncertainty that schools, fearing liability, often default to multiple forms to ensure compliance. A 500-student school must manage complex consent tracking, where any ambiguity in a consent record can mean legal liability.

California requires systems to let parents opt out at any time, which sounds reasonable until you realize it means maintaining a running record of which children can be helped and which can’t.

The principal looks at the legal complexity, the cost of compliance, the risk of lawsuits, and makes the only sensible choice. Stick with traditional methods. Children who could get early intervention instead struggle for years—not because parents objected, but because the paperwork became unmanageable.
