When Apps Override Birth Certificates: The Slippery Slope of the Surveillance State
I’ve been reading about this new ICE facial recognition app called Mobile Fortify, and honestly, it’s keeping me up at night. Not in the “oh that’s mildly concerning” way, but in the “this is genuinely terrifying and we’ve crossed a line we can’t uncross” way.
The headline itself is bad enough - mandatory facial scans, 15 years of data retention regardless of citizenship status. But it’s the detail buried in the reporting that really got me: ICE officials can apparently treat a biometric match from this app as “definitive” and ignore actual evidence of American citizenship, including birth certificates.
Let me say that again. Your birth certificate - that legal document that has been the foundational proof of citizenship for generations - can now be overruled by an app. A bloody app.
Think about that for a moment. We’re living in a world where software, developed by a private company, with all the inherent biases and technical limitations that entails, now has more legal authority than notarised government documents. Computer says no, get in the van. It’s like that old Little Britain sketch, except instead of being denied a bank transaction, you’re being denied your fundamental rights as a citizen.
The technical problems with this are staggering. Facial recognition technology is notoriously unreliable, particularly when it comes to people with darker skin tones. This isn’t some conspiracy theory - it’s well-documented technical reality. Darker skin reflects less light, making it harder for these systems to identify distinguishing features accurately. So we’ve essentially built a system with discrimination baked right into its technical architecture. Whether that’s intentional or not almost doesn’t matter at this point - the effect is the same.
And then there’s the straightforward unreliability of the technology itself. Someone mentioned that facial recognition can be fooled by a printed picture. Others pointed out that a bad hair day, new glasses, a dye job, or even a bruise could throw off the results. Yet somehow this fallible technology is being given precedence over legal documents?
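To make the unreliability point concrete, here's a back-of-the-envelope sketch. The numbers are hypothetical illustrations I've picked for the arithmetic, not figures from any reporting on Mobile Fortify - but the base-rate logic holds for any one-to-many search: even a matcher that sounds impressively accurate produces a steady stream of false hits when it's run against a gallery of millions.

```python
# Back-of-the-envelope: why a "1 in 10,000" error rate isn't what it sounds like
# in a one-to-many search. All numbers are hypothetical illustrations.

false_match_rate = 1e-4        # assume 1 in 10,000 comparisons falsely matches
gallery_size = 10_000_000      # assume a 10-million-face database

# Expected number of *false* candidate matches for a single probe photo
# compared against every face in the gallery:
expected_false_matches = false_match_rate * gallery_size
print(expected_false_matches)  # 1000.0 false candidates per search
```

A thousand wrong candidates per search, under assumptions that flatter the technology. That's the scale of error being handed precedence over a birth certificate.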
The Fourth Amendment implications here are enormous. Sure, taking someone’s photo in public has long been considered legal, but plugging that into a massive database for identification purposes? That’s a different ballgame entirely. We’re watching the definition of “reasonable search” being stretched beyond recognition, and honestly, with the current Supreme Court, I’m not optimistic about how they’d rule if this gets challenged.
What really gets under my skin is how we got here. This didn’t happen overnight. The infrastructure for this kind of surveillance state has been building for decades - the Patriot Act, the creation of the Department of Homeland Security, the normalisation of mass data collection. We’ve been in the “fuck around” phase for years while privacy advocates shouted warnings that most people dismissed as paranoid. Now we’re firmly in the “find out” phase, and suddenly it’s not so theoretical anymore.
Working in IT and DevOps, I’ve seen firsthand how systems can fail, how bugs can persist, how biases in training data can produce wildly skewed results. The idea that law enforcement can use these systems to override fundamental legal documents terrifies me on a professional level, let alone a civil liberties one. I’ve debugged enough code to know that “the computer says so” is never, ever a good enough reason to make decisions that affect people’s lives.
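Here's what I mean, as a sketch. This is my own illustration of how any sane decision system would weigh a biometric hit - the function and field names are mine, not the app's design - and it's precisely the inversion of what ICE is reportedly doing: documentary evidence ends the question, and the match score is at most grounds for further checks, never a verdict.

```python
# Sketch: a biometric match treated as one noisy signal among several,
# never as something that trumps documentary evidence. Names and the
# threshold are my own illustrative assumptions, not the app's design.

from dataclasses import dataclass

@dataclass
class Evidence:
    biometric_match_score: float   # 0.0-1.0 similarity from the face matcher
    has_birth_certificate: bool
    has_passport: bool

def citizenship_disputed(e: Evidence, threshold: float = 0.9) -> bool:
    # Hard documentary evidence should settle the question, full stop.
    if e.has_birth_certificate or e.has_passport:
        return False
    # Only in the absence of documents does the biometric signal get a say,
    # and even then it justifies further checks, not a final decision.
    return e.biometric_match_score >= threshold

# A 0.97 match means nothing against an actual birth certificate:
print(citizenship_disputed(
    Evidence(0.97, has_birth_certificate=True, has_passport=False)))  # False
```

The reported policy flips that `if` statement on its head.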
And here’s the kicker - where exactly do they deport natural-born US citizens who get flagged by this system? If you were born in Melbourne, Florida, have lived your whole life in the States, and this app decides you’re not a citizen, where do you go? Do they just pick a country that “looks right” based on your appearance? It’s absurd on its face, which suggests the real purpose here isn’t about accurate immigration enforcement at all.
The data retention aspect bothers me too. Fifteen years of biometric data on everyone, citizen or not. That’s a treasure trove for any bad actor who might gain access - foreign governments, hackers, or future administrations with even worse intentions than the current mob. We’re building the infrastructure for oppression and trusting that it’ll never be misused. History suggests that’s monumentally naive.
What can we do about this? Well, practically speaking, masks are looking pretty good right about now - though I suspect we’ll see laws against “concealing your identity” pop up faster than you can say “authoritarian creep.” More substantively, we need to be voting in every election - and I mean every election. State and local races matter enormously here, arguably more than federal ones in some ways. We need to be supporting organisations like the ACLU who are fighting these battles in court. We need to be loud about this, making it politically costly to support this kind of surveillance state expansion.
And maybe, just maybe, we need to start having serious conversations about dismantling some of these systems entirely. Not every problem technology can solve should be solved by technology. Especially when the “solution” involves giving an app more authority than foundational legal documents.
The irony isn’t lost on me that I’m typing this on a device that’s probably contributing to surveillance in its own ways. We’re all complicit to some degree in building the infrastructure that’s now being weaponised against civil liberties. But recognising that doesn’t mean accepting what’s happening now.
This isn’t the future I want for my daughter. It’s not the future anyone should want. And if we don’t push back hard on this - through legal challenges, through voting, through making this politically radioactive - it’s only going to get worse from here.
Time to get loud about this, Melbourne. Time to get loud about this, Australia. Because if you think this kind of authoritarian surveillance tech won’t make its way here eventually, you haven’t been paying attention.