When AI Companies Start Admitting the Quiet Part Out Loud
There’s something darkly fascinating about watching the tech industry’s mask slip. DeepSeek, one of China’s AI companies, recently did something quite unusual—they actually acknowledged that AI might, you know, eliminate a lot of jobs. Their CEO even called for a “whistle-blower” system to track job losses. He said he was optimistic about the technology but pessimistic about its impact on society.
Wait, what? Did an AI company executive just admit they’re building something they think will harm society?
The whole thing has this bizarre quality to it, like watching someone simultaneously push the accelerator and warn everyone about the oncoming cliff. I’ve been in IT and DevOps for decades now, and I’ve seen plenty of technological shifts. I’ve watched automation replace manual processes, seen cloud computing transform infrastructure teams, witnessed the gradual erosion of certain roles while new ones emerged. But there’s something different about this AI wave—it’s happening faster, touching more industries simultaneously, and the people building it are starting to say the quiet part out loud.
Someone in the discussion thread nailed it: “We love when AI companies accidentally admit they know exactly how many jobs they’re about to eliminate but keep building anyway because the money is too good to stop.” That’s the crux of it, isn’t it? They know. They’ve always known. The impact assessments exist, the projections are there, but the financial incentives are just too massive to pump the brakes.
What really gets under my skin is the inevitable hand-waving that’ll come when the job losses start piling up. I can already hear the defence: “Well, everyone else was doing it. I’m just one company. How could I be responsible?” It’s the same playbook we’ve seen with climate change, social media’s impact on mental health, and the gig economy’s erosion of worker protections. Break things first, claim ignorance later, and maybe—maybe—generations down the line, someone will look back and say it was all a bit messed up.
But here’s where it gets complicated for me. I’m genuinely excited about AI technology. The capabilities we’re seeing are remarkable. The potential applications in healthcare, scientific research, accessibility—these are real and meaningful. I use AI tools in my work, and they’ve made certain tasks dramatically easier. I’m not a Luddite standing in the way of progress.
Yet I find myself thinking about what comes next. Australia’s economy isn’t immune to these shifts. Melbourne’s got a thriving tech sector, but we’ve also got call centres, administrative roles, content creation jobs—all in the firing line. When I think about my daughter’s generation entering the workforce, what exactly are they entering into? What happens when the entry-level jobs that used to be stepping stones to careers simply don’t exist anymore?
The discussion about capitalism’s role in all this is worth unpacking too. Someone argued that making a handful of people rich at the cost of billions is literally the point of capitalism. Another person pushed back, saying capitalism is supposed to benefit everyone. A third called that naive.
They’re all sort of right, aren’t they? Capitalism has lifted billions out of poverty, created unprecedented prosperity, and driven innovation. It’s also concentrated wealth at levels we haven’t seen since the Gilded Age, externalised environmental costs, and shown a remarkable ability to ignore human suffering when quarterly earnings are at stake. The system works brilliantly for efficiency and innovation; it’s rubbish at ensuring the gains are distributed fairly or that negative externalities are properly addressed.
What frustrates me most is that we could actually do something about this. We could implement universal basic income trials. We could reform tax structures to ensure AI-driven productivity gains benefit society broadly, not just shareholders. We could invest heavily in retraining programs before the displacement happens, not after. We could treat this as a societal transition requiring planning and support, rather than a natural disaster we just have to weather.
But that would require political will, coordinated action, and probably some level of international cooperation. Instead, we’ll likely stumble through this transition the way we’ve stumbled through every other major technological shift—reactive, chaotic, with the most vulnerable bearing the brunt of the disruption while the benefits flow upward.
DeepSeek's CEO calling for an AI job loss whistle-blower system is, I suppose, a tiny step toward acknowledging reality. But acknowledgement without action is just performance. It's corporate theatre designed to look responsible while changing nothing fundamental.
Still, I want to end on something approaching hope, because despair isn't useful. The fact that we're having these conversations matters. That even AI company executives are admitting there's a problem suggests the issue is too big to ignore. We've faced massive economic transitions before: the Industrial Revolution, the shift from agricultural to manufacturing economies, the digital revolution. Humanity has adapted, though never without pain and never without having to fight for labour protections and social safety nets.
The question is whether we can learn from history and be more proactive this time. Whether we can build the support structures before they’re desperately needed rather than after. Whether we can have an honest conversation about what kind of society we want AI to help create, rather than just accepting whatever emerges from pure market forces.
I’m not holding my breath, but I am paying attention. And I think you should too.