Not long ago, Meta quietly updated its terms of service. The new language said that public posts, photos, and other user content could now be used to train its AI models. You might have missed it. Most people did.
There was no big announcement. Just a notice buried behind a link, tucked inside a larger rollout. One more checkbox. One more “learn more.” One more line in a privacy policy you were never going to read.
And just like that, millions of people were signed up to train Meta’s AI—whether they realized it or not.
This is how consent works now. Not as a clear yes or no, but as a creeping default. Platforms like Google, Meta, and OpenAI quietly roll out updates, soften the language, and bury the changes under a few clicks. They count on fatigue. They count on trust. Most people don’t notice—and that’s by design.
The changes are often legal. But that’s not the same as ethical.
In theory, we “agree” by continuing to use the service. But in practice, there’s no real choice. You can’t meaningfully say no when the only option is to leave the digital world behind.
That’s not consent—that’s coercion with a polite face.
Just this month, Amazon announced it’s sunsetting one of Alexa’s core privacy features: the ability to opt out of having your voice recordings reviewed by humans. That feature was only added after public backlash years ago—now it’s being quietly removed. No apology, no meaningful explanation. Just a blog post and a deadline: March 28. After that, unless you mute the mic or unplug the device, you’re back in the training pool.
This hits harder for anyone whose home is fully wired through Alexa. Smart lights. Smart locks. Thermostat. Security system. Routines. Voice control over daily life. At that point, Alexa isn’t just a gadget—it’s the operating system of your home. Unplugging it isn’t simple. And Amazon knows that.
For those users, “just opt out” isn’t a choice. It’s a non-option framed as freedom.
Once again, the decision isn’t really yours. You’re not being asked—you’re being told.
What used to be a moment of permission is now a performance. Consent has become theater.
Here’s the thing: AI models need a lot of data. The big ones—the kind built by OpenAI, Google, and Meta—run on scale. The more information they can pull in, the better they perform. So companies go out and gather whatever they can get their hands on.
Sometimes that means public websites. Sometimes it’s user-generated content scraped from forums, blogs, and social media. Other times it’s pulled from platforms that once promised they’d never use your data that way. It doesn’t really matter how it gets there. Once it’s in the training pipeline, it stays there—and good luck figuring out what piece of your work ended up shaping a model’s output.
That’s where things start to get blurry. A line from your essay, a sketch you shared with a friend, a photo you stored in the cloud—suddenly, it’s all part of the learning process for tools you never agreed to help build. Not because you opted in, but because no one gave you a real way to opt out.
Adobe became a flashpoint for this tension back in 2024. A quiet update to its terms of service suggested the company could access and use customers’ content—including files stored in Creative Cloud—for purposes related to AI development. That includes design work, photography, video, even client projects. Understandably, users panicked. Creators across the internet called it what it felt like: surveillance, wrapped in corporate language.
But the implications ran deeper than frustration. A lot of professionals use Adobe tools to handle work covered by NDAs—unreleased marketing campaigns, legal documents, private client assets, internal corporate communications. If Adobe had moved forward without offering a clear opt-out, that kind of content could’ve been exposed to internal review or swept into training systems. It wasn’t just a privacy issue—it was a legal and ethical risk, especially for people whose livelihoods depend on confidentiality.
And users weren’t given much say. There was no granular control, no file-by-file setting. Just vague terms and default permissions. If you wanted to keep using Adobe tools, you had to accept the possibility that your content might be used behind the scenes in ways you’d never approve of.
After the backlash, Adobe promised revised terms and more transparency. But the trust had already taken a hit. And the timing wasn’t subtle—the change came just as Adobe was pushing its Firefly AI tools deeper into the Creative Cloud suite.
It’s a pattern: push as far as you can, wait for people to notice, then pull back just enough to quiet the noise. But the direction of travel doesn’t really change. And in the meantime, more and more content slips into the training pool.
This isn’t just a technical issue. It’s a legal and societal one. In a system built on common law, precedent matters. When companies normalize extracting private data without real consent, they’re not just shifting user expectations—they’re quietly influencing what courts, lawmakers, and society come to see as acceptable. The erosion of privacy in tech doesn’t stay in tech. It sets the stage for erosion everywhere.
Modern platforms still go out of their way to offer users a sense of agency. You can adjust privacy settings, limit ad personalization, hide your profile from public view, or choose who sees a post. At first glance, it feels like autonomy. There’s a dashboard, a few toggles, maybe even a notification reminding you that you’re in control. But when you look closer, it becomes obvious that these settings don’t govern whether your data is collected, used, or monetized—they only shape how that data is presented back to you.
This is one of the more insidious developments in how privacy works today. Platforms have learned that the appearance of choice is often enough to quiet concern. So they offer users control over the aesthetics of data use—the cosmetics, not the core. You can change the lighting in the room, but the house still belongs to someone else. A company like Meta may let you turn off “personalized ads,” but that doesn’t mean they stop collecting behavioral data—it just means they won’t show you the results directly. Likewise, Google’s account controls allow you to pause certain kinds of tracking, but they don’t give you a say in how your historical data has already been processed, analyzed, or sold.
This is where the illusion becomes most clear: the truly consequential decisions—the ones about AI training, long-term storage, third-party access, and corporate use—are made unilaterally. They’re buried in terms of service you can’t negotiate, and updated in ways you’ll likely never notice. You’re not invited to the table where those decisions are made. You’re handed a menu and told it’s the same thing.
Control, in this environment, has been redefined. It no longer means sovereignty over your information—it means the ability to customize your experience within boundaries you didn’t set. And over time, that normalization has changed how people understand what privacy even is. We’ve been trained to see privacy as a set of interface options, not as a right with legal and ethical weight.
The danger is that as these systems become more complex and more opaque, the illusion of control becomes harder to separate from the real thing. Users feel empowered because they clicked a few buttons. But those buttons don’t touch the infrastructure. They don’t stop the collection. They don’t undo what’s already been done. They just offer a way to stay comfortable inside the system—without ever challenging the system itself.
The unsettling truth about modern platforms isn’t just that control is hollow—it’s that escape is largely impossible. We still like to believe we can opt out, that we can choose not to participate in systems we don’t trust. But that idea is quickly becoming a fiction. You can delete an app, deactivate an account, stop posting entirely—but your data doesn’t disappear with you. The systems you once interacted with continue on, and the traces of your presence remain.
Even when companies offer a way to opt out, it usually applies only to what comes next. What’s already been scraped, collected, and processed is almost never returned. Your work, your likeness, your patterns—all of it may already live inside systems designed to grow more powerful without ever asking permission again. And once that happens, the question isn’t whether you participate. You already did. The only question left is how much influence that participation will continue to have.
That might be the hardest part to face: we’re not just being used—we’re being absorbed. Folded into a feedback loop we can’t fully see, let alone stop. Our inputs—conversations, images, behaviors—are helping shape AI systems that will increasingly shape everything else: hiring, education, policing, healthcare, politics, media, law. These systems are being trained on us, and in turn, they’re being used to remake the world around us. A world that was supposed to be built for people. A world that now seems increasingly built for scale, for automation, for optimization—on terms we never agreed to.
We’re not just feeding models—we’re feeding momentum. Decisions about how AI will function, who it will serve, what values it encodes—those decisions are being made by a handful of private actors, often behind closed doors, often after the systems are already deployed. And yet, we’re all participating. Every day. Every time we click “accept.” Every time we create something online. Every time we’re told it’s too late to opt out.
What’s being reshaped here isn’t just our relationship to technology—it’s our relationship to power, to public life, to what it means to live in a society where systems are supposed to be accountable to the people they affect. And it’s all happening without a vote, without a pause, without a moment to ask what kind of future we actually want.
We didn’t consent to this. But we’re in it now—feeding a machine that’s rewriting the rules of society in real time, without our input, and locking us into a future we didn’t choose.
There’s no simple fix for this. The systems already exist. The data’s already been taken. The incentives are locked in. But that doesn’t mean we stay quiet.
We live in the aftermath of choices we didn’t get to make—but that doesn’t mean we stop making choices now. We can push back. We can pressure lawmakers, demand accountability, build alternatives, and refuse to normalize what should never have been acceptable. And we can stop pretending that being resigned is the same thing as being realistic.
This isn’t just about privacy. It’s about power—and how much of it we’ve already lost. The rise of AI has only made visible what was already happening beneath the surface: institutions shifting further away from public control, decisions being made without us, rights eroding not through force, but through quiet normalization.
The fight for digital rights is part of something larger. It’s about reclaiming a voice—not just in technology, but in how our governments function, how our institutions serve us, and whether democracy is still something we actively shape or something we just say we live under.
We won’t get a clean reset. But we still get a say.
And that begins by refusing to look away.
This isn’t the future most people imagined when we talked about innovation. The tools were supposed to serve us—not study us. The systems were supposed to be accountable—not invisible. But somewhere along the way, the boundaries slipped. And now, we’re living in the blur.
We’ve been told this is progress. That the benefits will outweigh the discomfort. That privacy is just a trade-off. But who agreed to the terms? Who drew the line between participation and exploitation? Between use and ownership? Between being online—and being mined?
We didn’t vote on this. We didn’t opt in. We just showed up. And that was enough.
Maybe it was never just about data.
Maybe it was about control—who has it, who loses it, and how quietly that shift can happen.
Maybe it was about erasing the line between being part of a system—and being consumed by it.