Scan Your Face to Use an AI? The Creeping Surveillance of Big Tech
Something’s been nagging at me this week. Word is spreading that Anthropic — the company behind Claude — is starting to require identity verification from users. Not just a credit card or an email address. We’re talking government-issued ID and facial recognition scans.
Let that sink in for a moment. A facial recognition scan. To use a chatbot.
I’ve been following the online discussion around this, and the reactions range from darkly amused to genuinely alarmed. Someone in one thread summed it up perfectly: “You now need to submit your passport and a DNA sample for every website or app. How the fuck did we reach this point?” A bit hyperbolic, sure, but the sentiment is completely understandable. There’s a very real sense that the walls are closing in on ordinary people who just want to use technology without handing over their entire identity.
The motivations behind this move are murky, and honestly, I don’t think there’s a single clean answer. Some people reckon it’s part of the broader geopolitical chess match between US and Chinese AI labs — a way to ensure competitors can’t access training data or capabilities through the API. Others are more cynical, pointing out that biometric data plus detailed conversation history is an extraordinarily valuable dataset. One comment that stuck with me suggested it might be about tying chat history to a real, identifiable person — useful not just for advertisers, but potentially for law enforcement too. Think about how Google search history already routinely turns up in criminal investigations.
And then there’s the “think of the children” angle — the age verification justification. Look, I’m a parent. My daughter is a teenager and I’m not naive about the kinds of things teenagers get up to online. But I’ve always been deeply sceptical of child safety being used as a Trojan horse for mass data collection and surveillance infrastructure. Once that system exists, it doesn’t stay narrowly scoped for long. It never does.
What frustrates me most is the asymmetry of it all. Anthropic positions itself as this deeply ethical, safety-conscious company — the “effective altruist” crowd, as some have characterised them — yet the solution to safety concerns is apparently to hoover up your biometric data. That’s not safety. That’s control dressed up in the language of responsibility.
Here in Australia, we’ve had our own version of these debates. The Online Safety Act, age verification proposals, the never-ending tug of war between privacy advocates and regulators. We’re not immune to this creep. And every time I see another layer of identity surveillance get normalised overseas, I know it’s only a matter of time before someone in Canberra starts taking notes.
The thing is, there’s a genuinely good alternative sitting right there, and the Claude situation is accelerating people towards it: running models locally. Open-source models have come a long way in the past couple of years. You can run surprisingly capable models on a decent machine, with no one watching, no passport scan required, no facial recognition ritual. The conversation stays on your hardware. It’s yours.
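To give you a sense of how little friction is involved, here’s a minimal sketch of a local chat loop using the Ollama Python client. It assumes you’ve installed Ollama and already pulled a model; the model name llama3.1 is just an example, so swap in whatever you’ve actually downloaded.

```python
# A minimal local chat loop using the Ollama Python client.
# Assumes Ollama is installed and running, and that you've pulled
# a model first, e.g.:  ollama pull llama3.1
# Install the client with:  pip install ollama

import ollama

history = []  # the full conversation lives here, on your machine

while True:
    prompt = input("you> ")
    if prompt.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": prompt})

    # The request goes to the local Ollama server; nothing leaves your hardware.
    response = ollama.chat(model="llama3.1", messages=history)
    reply = response["message"]["content"]

    history.append({"role": "assistant", "content": reply})
    print(reply)
```

No account, no ID check, no telemetry you didn’t opt into. The whole “verification” step is replaced by the fact that the model is sitting on your own disk.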
I’ve been tinkering more with local models lately, and while they’re not quite at the level of the frontier models for complex tasks, the gap is narrowing fast. And for a lot of everyday use cases — drafting, summarising, coding assistance — they’re more than adequate. The trade-off of slightly reduced capability in exchange for not submitting to what one commenter brilliantly called a “humiliation ritual” seems increasingly worth it.
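For the summarising case specifically, streaming the output token by token makes a local model feel much more responsive. A quick sketch, again assuming Ollama with an example model name; the notes.txt filename is purely illustrative:

```python
# Summarise a local text file, streaming tokens as they arrive.
# Assumes Ollama is running with a pulled model (llama3.1 is an example).

import ollama

with open("notes.txt") as f:
    text = f.read()

stream = ollama.chat(
    model="llama3.1",
    messages=[{
        "role": "user",
        "content": f"Summarise this in three dot points:\n\n{text}",
    }],
    stream=True,  # yields chunks instead of waiting for the full reply
)

for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```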
The irony is that heavy-handed verification requirements might do more to drive users toward less regulated, less transparent options than to achieve any meaningful safety outcome. If your response to people wanting privacy is to demand their face, don’t be surprised when they find a door you can’t lock.
I’m not ready to declare the dystopia fully arrived just yet. But I’m paying attention. And I’d encourage you to think carefully about what you’re trading away the next time an app asks for just a little more of you than feels comfortable.