The AI Savior Complex: Wrestling with Our Technological Future
Looking through various online discussions lately, I've noticed a disturbing yet understandable trend: people actively hoping for an uncontrolled artificial superintelligence (ASI) to save us from ourselves. The sentiment reminds me of sitting in my favourite Carlton café, overhearing conversations about the latest political developments while doomscrolling through increasingly concerning headlines.
The logic seems straightforward enough - we’ve made a proper mess of things, so why not roll the dice on a superintelligent entity taking the reins? Recent political developments, particularly in the US, have only amplified these feelings of desperation. Walking past the State Library yesterday, I noticed a group of young protesters with signs about climate change, and it struck me how their generation might view ASI as their last hope for a liveable future.
But here’s where it gets complicated. While the idea of a benevolent superintelligent overseer might sound appealing when we’re frustrated with human leadership, it’s worth remembering that we’re essentially talking about creating something with godlike powers and crossing our fingers that it turns out well. The implications are staggering - imagine Federation Square being transformed overnight into something we can’t even comprehend, because an ASI decided it served a better purpose.
The corporate angle is particularly concerning. Remember how the NBN rollout went? Now imagine that level of corporate and political interference, but with something infinitely more powerful than internet infrastructure. The thought of an ASI being controlled by profit-driven entities or power-hungry governments should send shivers down anyone’s spine.
The most intriguing discussions centre on the question of control itself. We're essentially debating whether to create something far more intelligent than ourselves while simultaneously trying to ensure it remains subservient to us. Picture a border collie taking orders from an ant - and even that understates it, because the gap between an ASI and us would be far wider still.
Looking at my little one playing with their iPad, I wonder what kind of world we’re stepping into. Will we be creating our successors, or our executioners? Or perhaps something entirely different that our limited human minds can’t even conceive?
The truth is, we’re not going to stop AI development. That train has well and truly left Flinders Street Station. But perhaps we need to be more thoughtful about how we approach it. Rather than hoping for an uncontrolled ASI to save us from ourselves, maybe we should focus on creating systems that enhance and complement human decision-making while maintaining meaningful human oversight.
The coffee's gone cold while writing this, and the café's getting busy with the lunchtime crowd. Sometimes I think we're all just trying our best to navigate this rapidly changing world, hoping someone - or something - will show us the way forward. But maybe the answer isn't abandoning our responsibility to an artificial deity; it's using these powerful tools to become better versions of ourselves.
Though I have to admit, watching the news lately, I understand why some folks are ready to hand over the keys to the kingdom to our potential AI overlords. Let’s just hope we’re still around to write blog posts about how it all turned out.