When the Kids Running the Future Act Like, Well, Kids
The internet has been buzzing with yet another Twitter spat between tech titans, and frankly, it’s left me feeling like I’m watching a playground fight between kids who happen to control technologies that could reshape humanity. The whole thing started with what appears to be Elon Musk taking shots at Sam Altman over some AI development drama, and honestly, watching these two go at it publicly has been equal parts fascinating and deeply concerning.
What struck me most about this whole kerfuffle wasn’t the specific jabs or the technical arguments they were making. It was the collective response from observers who seem to oscillate between treating this like entertainment and genuine horror at the realisation that these are the people steering our technological future. Someone in the discussion hit the nail on the head when they said these are “the kids building ASI” – artificial superintelligence – and the implication is terrifying when you think about it.
The comment that really got under my skin was from someone who pointed out how normalised this behaviour has become. They mentioned how there used to be a “respectful base layer” in public discourse, especially from people in positions of power. Remember when Howard Dean’s enthusiastic “Yeah!” essentially ended his political career? Now we have billionaire tech CEOs throwing public tantrums on social media platforms, and it’s just Tuesday.
What’s particularly frustrating is how this reflects a broader cultural shift where consequences seem to evaporate at a certain altitude of wealth and influence. It reminds me of conversations I’ve had with colleagues here in Melbourne’s tech scene – we’re all working on systems and platforms that these companies will eventually dominate or replace, yet the people making the big decisions seem to lack the emotional maturity you’d expect from a team lead, let alone someone controlling billion-dollar AI research programs.
The psychological analysis floating around the discussion was perhaps the most insightful part. One user made an excellent point about how these aren’t some mystical übermenschen – they’re humans with “many of our worst impulses cranked up due to being above consequence.” The comparison to Versailles aristocrats was particularly apt. We like to imagine that people with immense power are somehow beyond petty squabbles, but history shows us they’re often more petty than the rest of us, not less.
This connects to something that’s been bothering me about the AI development landscape more broadly. We’re rushing toward technologies that could fundamentally alter human society, yet the discourse around them is increasingly dominated by ego battles and corporate feuds. When I think about the environmental impact of training these massive models, the potential job displacement, the questions around data privacy and consent – all of these critical issues get overshadowed by tech bro drama.
The irony isn’t lost on me that we’re potentially handing over significant aspects of our future to artificial intelligence systems created by people who can’t even handle a disagreement without descending into public mudslinging. There’s something deeply unsettling about the prospect of AI systems trained on data that includes the very Twitter feeds where their creators are having these meltdowns.
Perhaps the most sobering observation came from someone who mentioned climbing corporate ladders only to discover that “decisions are just barely one step above chaos.” That resonates with my experience in IT and DevOps – the higher up you go, the more you realise how much of what looks like grand strategy is actually just educated guesswork and personality conflicts.
But here’s what gives me hope: the reaction itself. The fact that people are calling this out, that there’s genuine concern about the behaviour of these leaders, suggests we haven’t completely lost our ability to recognise when something’s wrong. The discussion showed a healthy scepticism about treating these figures as infallible geniuses just because they’ve accumulated wealth and influence.
Maybe what we need are stronger accountability mechanisms for people in positions of technological power. Not just regulatory oversight of their companies, but social expectations that carry real consequences for behaviour that undermines public trust. After all, if we’re going to let these companies shape our technological future, perhaps we should expect their leaders to demonstrate the kind of emotional intelligence and public responsibility that the role demands.
The future is too important to leave to people who treat it like a Twitter flame war.