The Normalisation of Surveillance: Why Meta's Smart Glasses Should Terrify Us All
I’ve been following the discussion around Meta’s Ray-Ban smart glasses with growing unease, and frankly, I’m baffled by how casually we’re all accepting what amounts to a massive expansion of surveillance technology into our daily lives. While tech reviewers gush about the convenience and cool factor, we’re sleepwalking into a world where privacy becomes even more of a distant memory.
The fundamental issue here isn’t about the person wearing these glasses - it’s about everyone else around them. This represents a complete shift from the usual “don’t like it, don’t buy it” consumer choice argument. When someone walks into a café on Collins Street wearing these things, everyone in that space becomes a potential data point for Meta’s algorithms, whether they consented to it or not.
What particularly frustrates me is how the discussion gets derailed by technical arguments about LED indicators and firmware protections. Sure, Meta claims there are safeguards - a little light that supposedly indicates when recording is happening. But anyone who’s worked in tech knows that software and firmware can be modified, updated, or simply bypassed. The real concern isn’t just the amateur creep trying to disable an LED; it’s the systematic normalisation of ambient surveillance by one of the world’s most powerful data collection companies.
The parallels with Google Glass are interesting, but there’s a crucial difference this time around. Google Glass looked obviously techy and drew attention - people knew they were dealing with a camera. These Ray-Ban glasses blend into everyday fashion, making the surveillance invisible and socially acceptable. That’s not an accident; it’s by design.
What really gets under my skin is the power imbalance this creates. Meta gets to hoover up data from people who never agreed to be part of their ecosystem, all while hiding behind the argument that there’s “no expectation of privacy in public.” That might be legally true in some jurisdictions, but it completely misses the ethical point. Just because something is legal doesn’t make it right, and just because we can build this technology doesn’t mean we should deploy it without serious public debate.
I keep thinking about how this technology could be used in more sensitive spaces. What happens when someone wearing these glasses walks into a medical clinic or a domestic violence shelter, or stands within earshot of a supposedly private conversation in a public park? The potential for abuse extends far beyond individual privacy violations - we’re talking about a fundamental change in how society functions when anyone might be a walking surveillance device.
The fact that many sensible privacy concerns in online discussions are getting heavily downvoted raises its own red flags. When legitimate questions about surveillance technology face coordinated pushback, it suggests interests at play beyond organic public discourse. Whether that’s astroturfing, bot activity, or just overzealous tech enthusiasts, it’s concerning that we can’t have an honest conversation about the implications.
The truth is, we’re already deep into a privacy crisis that most people either don’t understand or have given up fighting. But smart glasses represent a new frontier - moving from tracking what we do online to capturing everything we see and hear in the physical world. Combined with advancing AI capabilities, this isn’t just about recording moments; it’s about building comprehensive profiles of how people move, speak, and interact in spaces they thought were semi-private.
What gives me some hope is that we’ve pushed back against this kind of overreach before. Google Glass faced significant resistance, and many venues banned the device outright. We need that same energy now, but this time it needs to be more sustained and better organised. This isn’t about being anti-technology; it’s about ensuring that technology serves humanity rather than subjecting us to ever-greater levels of surveillance.
The responsibility here shouldn’t fall solely on individuals to protect themselves. We need stronger regulatory frameworks that actually have teeth, public spaces that clearly prohibit recording devices, and technology companies that prioritise consent and privacy by design. Until then, we’re all subjects in Meta’s grand experiment, whether we signed up for it or not.