Chat Control 2.0: When 'Protecting the Children' Becomes a Surveillance Blank Cheque
There’s a study doing the rounds this week claiming that six out of ten Europeans support Chat Control 2.0 because they believe it will improve online safety. My first reaction, honestly, was a hollow laugh into my batch brew. Six out of ten. Right.
Now, I’m not going to sit here and call everyone who supports this proposal an idiot — that’s lazy thinking and it doesn’t help anyone. But I do think a lot of those six out of ten would have a very different opinion if the survey question wasn’t framed as something like “do you want children to be safe online?” Because of course you do. Everyone does. That’s not even a real question.
The real question — the one that apparently only 21.6% of respondents felt they had enough information to answer — is whether Chat Control 2.0 is actually a workable, proportionate, or even effective way to achieve that goal. And when you start pulling at those threads, the whole thing unravels pretty quickly.
For those not familiar, Chat Control 2.0 is a proposed EU regulation that would essentially require platforms to scan private, encrypted communications for illegal content, particularly child sexual abuse material (CSAM). Sounds reasonable on the surface, right? Nobody wants CSAM circulating online. But here’s the thing: to do this kind of scanning, you either have to break end-to-end encryption outright or scan messages on the device before they’re encrypted — so-called client-side scanning — which undermines the guarantee just as thoroughly. You’re not just peeking in on the bad guys — you’re building a surveillance infrastructure that touches everyone’s private messages. Every text. Every photo. Every conversation you have with your partner, your kids, your doctor, your lawyer.
The AI models being proposed to do this scanning are, by the way, nowhere near accurate enough. We’re talking about systems that would generate enormous numbers of false positives — innocent people getting flagged, potentially investigated, their private lives scrutinised by automated systems that don’t understand context. Anyone who’s worked in tech knows how badly these things can go wrong at scale. It’s not theoretical. It’s a practical certainty.
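To see why false positives at scale are a practical certainty rather than a quibble, it helps to run the base-rate arithmetic. The numbers below are purely illustrative assumptions of mine — not figures from the proposal — but the shape of the result holds for any realistic inputs: when the thing you’re hunting for is vanishingly rare, even an impressively accurate classifier buries the real hits under a mountain of flagged innocents.

```python
# Back-of-the-envelope base-rate calculation. All numbers are ASSUMED
# for illustration, not taken from the Chat Control proposal.

def flag_counts(messages: float, prevalence: float,
                true_positive_rate: float, false_positive_rate: float):
    """Return (true flags, false flags) for a scanner run over `messages`."""
    illegal = messages * prevalence
    innocent = messages - illegal
    true_flags = illegal * true_positive_rate      # actual CSAM, caught
    false_flags = innocent * false_positive_rate   # innocent people, flagged
    return true_flags, false_flags

# Assumptions: 10 billion messages/day scanned, 1 in a million illegal,
# and a generously good classifier: 99% detection, 1% false-positive rate.
tp, fp = flag_counts(10_000_000_000, 1e-6, 0.99, 0.01)
precision = tp / (tp + fp)  # chance that any given flag is a real hit

print(f"True flags per day:    {tp:,.0f}")      # roughly 10 thousand
print(f"False flags per day:   {fp:,.0f}")      # roughly 100 million
print(f"Chance a flag is real: {precision:.4%}")
```

Under these assumptions, around one flag in ten thousand points at actual abuse material; the rest are innocent people queued up for automated scrutiny. You can swap in kinder numbers and the dragnet still drowns in its own noise — that’s the base-rate fallacy, and no amount of model tuning makes the rarity of the target go away.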
Working in IT, this kind of proposal makes my blood pressure tick up noticeably. I’ve spent years thinking about security architecture, about why encryption matters, about why backdoors — even well-intentioned ones — are a gift to every malicious actor on the planet. There is no such thing as a backdoor that only the good guys can use. Once you weaken the lock, you weaken it for everyone.
What really gets under my skin, though, is the manufactured consent angle. The framing of these surveys, the language in the proposal itself — “it will improve online safety” stated as settled fact rather than contested claim — it all feels deeply manipulative. We saw something similar here in Australia with the recent social media age verification debates. The question gets boiled down to “do you support protecting kids?” and suddenly you’ve got a majority in favour of something they don’t fully understand, whose implementation details they’ve never been shown.
My teenage daughter is the reason I actually care about this stuff at a gut level, not just a professional one. She’s growing up in a world where her digital life is as real and important to her as anything happening offline. The idea that her private communications — her messages to friends, her venting sessions, her completely normal teenage existence — should be subject to automated scanning by some government-mandated system is genuinely alarming to me. Not because she has anything to hide. Because privacy isn’t about hiding things. It’s about having a space that’s yours.
The most constructive thing any of us can do right now is actually talk to people about this. Not condescend, not lecture, not call them cattle or idiots — just explain what the proposal actually does, in plain language, without the spin. One conversation at a time. A lot of people, once they understand the mechanics rather than just the marketing, come around pretty quickly. The surveillance infrastructure being proposed here isn’t a scalpel targeting criminals. It’s a dragnet. And once it’s built, it doesn’t get unbuilt.
The good news is that this isn’t over yet. There are organisations and political parties actively fighting Chat Control — the Pirate Parties in Europe have been particularly vocal and principled on this issue. The opposition is real and it’s growing. The statistic that one in five Europeans is willing to actively protest this regulation is, in its own way, significant — that’s a lot of people for something most of the public still doesn’t fully understand.
So yeah, six out of ten might currently support this — framed the right way, with the details buried. But informed consent looks very different. And that’s worth fighting for.