While We Argue About AI Art, Robots Are Already Pulling Triggers
I’ve been thinking a lot about priorities lately. You know that feeling when you’re scrolling through endless debates about ChatGPT writing essays or AI-generated Instagram ads, while somewhere in the back of your mind, there’s this nagging sense that we’re missing something far more urgent? Well, turns out that nagging feeling might be onto something.
Someone recently brought up Israel’s Lavender and Gospel systems - AI-powered targeting tools that, according to reporting by +972 Magazine, generate lists of human targets from mass-surveillance data (Lavender) and mark buildings for strikes (The Gospel), with minimal human review of each decision. The casual way this was mentioned, almost as an afterthought while discussing Model UN research, really struck me. Here’s a technology that represents one of the most significant shifts in warfare since the invention of gunpowder, and it’s being discussed like it’s yesterday’s news.
What’s particularly chilling is the quote from an Israeli army official describing their role: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval.” Twenty seconds to rubber-stamp a death sentence. That’s barely enough time to read a tweet, let alone make a life-or-death decision.
The person who raised this topic made an excellent point about our misplaced priorities. We’re having heated debates about Sydney Sweeney commercials and whether AI-generated art is “real art,” while systems that can kill without meaningful human intervention are already operational. It’s like arguing about the colour of the deck chairs while the ship is taking on water.
What really gets to me is the scoring system reportedly used by Lavender - apparently assigning points to people based on various factors. Two points for being related to a Hamas member, two points for being seen near a certain location, and so on. Once you cross a certain threshold, you’re automatically marked for elimination. It sounds like something from a dystopian novel, except it’s happening right now - and as the toy sketch below shows, the underlying logic is almost insultingly simple.
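To be clear, Lavender’s internals aren’t public, so this is a deliberately crude sketch of what a threshold-based “suspicion score” could look like in principle. Every factor name, weight, and cut-off below is invented for illustration:

```python
# A toy model of a threshold-based "suspicion score".
# All factor names, weights, and the threshold are hypothetical;
# the real system's internals are not public.

# Hypothetical weighted factors, echoing the kind described in reporting
FACTORS = {
    "related_to_member": 2,
    "seen_near_flagged_location": 2,
    "in_flagged_group_chat": 2,
    "recently_changed_phones": 1,
}

SUSPICION_THRESHOLD = 5  # invented cut-off for being marked


def suspicion_score(observations: set[str]) -> int:
    """Sum the weights of every factor that was observed."""
    return sum(w for factor, w in FACTORS.items() if factor in observations)


def is_marked(observations: set[str]) -> bool:
    # Crossing the threshold is all it takes - no human judgment required.
    return suspicion_score(observations) >= SUSPICION_THRESHOLD


# A person can be marked purely through circumstantial correlations:
seen = {"related_to_member", "seen_near_flagged_location", "in_flagged_group_chat"}
print(suspicion_score(seen), is_marked(seen))  # 6 True
```

The unsettling part is how mundane it is: a dictionary of weights and a comparison operator, standing in where human judgment used to be.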
Working in IT, I know how these systems behave. I’ve seen enough bugs, edge cases, and unexpected behaviours to know that any automated system is fallible. The difference is that when my deployment scripts fail, nobody dies. When a facial-recognition model misidentifies someone, or when its training data is corrupted, the consequences are irreversible. The sketch below shows just how thin the line between “match” and “no match” can be.
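Another toy example - the embeddings and threshold here are fabricated, and real systems are far more sophisticated, but the failure mode is the same: identity reduced to a similarity score crossing an arbitrary line.

```python
# Why "the model matched a face" is not proof of identity.
# Face embeddings and the match threshold below are fabricated.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))


MATCH_THRESHOLD = 0.9  # hypothetical: "similar enough" is declared a match

# Fabricated embeddings: the suspect, plus two people caught on camera
suspect = [0.8, 0.6, 0.1]
lookalike = [0.7, 0.7, 0.1]  # a different person who happens to resemble them
unrelated = [0.1, 0.2, 0.9]

for name, emb in [("lookalike", lookalike), ("unrelated", unrelated)]:
    sim = cosine_similarity(suspect, emb)
    verdict = "MATCH" if sim >= MATCH_THRESHOLD else "no match"
    print(f"{name}: similarity={sim:.3f} -> {verdict}")

# Output:
#   lookalike: similarity=0.990 -> MATCH
#   unrelated: similarity=0.311 -> no match
# The look-alike clears the threshold - a number, not a person,
# just decided they were the suspect.
```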
There’s also the terrifying precedent this sets. Once this technology exists, it will spread. Other nations, non-state actors, even domestic surveillance programs - they’ll all want their own versions. We’re essentially watching the birth of a new category of weapon, one that could fundamentally change the nature of conflict and oppression worldwide.
The comment threads around this topic revealed something equally disturbing - the sheer apathy and resignation. People suggesting that civilian casualties are inevitable in war, that this technology might even reduce them, or that it’s no different from having humans memorise faces and operate drones. These responses miss the fundamental issue: we’re removing human judgment, conscience, and the possibility of mercy from life-and-death decisions.
What frustrates me most is how we’ve allowed ourselves to be distracted by relatively trivial AI concerns while the most serious applications slip by unnoticed. Yes, AI-generated content raises important questions about creativity, authenticity, and economic displacement. But these pale in comparison to the implications of autonomous killing machines.
Perhaps the silence around military AI applications isn’t accidental. It’s much easier to get people worked up about artists losing commissions than to confront the reality that we’re sleepwalking into a world where algorithms decide who lives and dies. The former feels manageable, something we can regulate or boycott. The latter feels overwhelming, beyond our control.
But here’s the thing - it’s not beyond our control, at least not yet. We still have time to demand transparency, to push for international treaties governing autonomous weapons, to insist on meaningful human oversight in life-and-death decisions. The technology exists, but the frameworks for controlling it are still being written.
The next time someone complains about AI ruining art or replacing writers, maybe we should redirect that energy toward the systems that are already replacing human judgment in matters of life and death. Because while we’re arguing about the authenticity of generated images, real people are dying based on algorithmic decisions made in milliseconds.
We need to get our priorities straight before it’s too late.