When AI Meets Government: The Perils of Algorithmic Deregulation
The news that DOGE is reportedly using AI to create a ‘delete list’ of federal regulations has been rattling around in my head for days now. It’s one of those stories that perfectly captures the bizarre intersection of cutting-edge technology and political ideology that seems to define our current moment.
On the surface, there’s something seductive about the idea. Anyone who’s worked in tech knows the frustration of bureaucratic bloat - those endless forms, redundant processes, and regulations that seem to exist purely to justify someone’s job. The promise of AI cutting through decades of accumulated red tape sounds almost utopian. Just feed the machine learning algorithm thousands of regulations, let it identify the redundant ones, and voilà - streamlined government.
But the more I think about it, the more unsettled I become. There’s something deeply troubling about delegating decisions that affect millions of lives to an algorithm, especially when that algorithm is being deployed by people with a very specific ideological agenda.
Someone in the discussion quoted Joseph Tainter’s work on societal collapse, suggesting we’re in the “ripping the pipes out of the walls for copper scrap” stage of empire. That metaphor hits hard. Regulations aren’t just bureaucratic noise - many of them were, quite literally, written in blood. Worker safety standards exist because people died in factories. Environmental protections exist because companies poisoned rivers and communities. Food safety regulations exist because people got sick and died from contaminated products.
The terrifying thing about using AI for this kind of wholesale deregulation is the potential for what we might call “algorithmic blindness.” Machine learning models are notoriously bad at understanding context, nuance, and unintended consequences. They can identify patterns in data, but they can’t understand why a particular regulation exists or what disasters it prevents. An AI might flag occupational health and safety requirements as “inefficient” without understanding that those requirements prevent workplace deaths.
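To make that concrete, here’s a deliberately naive sketch of what such a triage pass might look like. Everything in it is invented for illustration - the Regulation fields, the cost figures, the scoring formula - but the failure mode is real: the features a model can see say nothing about what a rule prevents.

```python
from dataclasses import dataclass

@dataclass
class Regulation:
    title: str
    compliance_cost_m: float    # annual industry cost in $M (made-up figures)
    citations_last_decade: int  # enforcement actions invoking the rule
    word_count: int

def redundancy_score(reg: Regulation) -> float:
    """Rank rules by surface features alone: expensive, rarely cited,
    and verbose all push the score up. Note what's missing - nothing
    here encodes why the rule exists or what it prevents."""
    return (reg.compliance_cost_m / max(reg.citations_last_decade, 1)) * (reg.word_count / 1000)

rules = [
    Regulation("Confined-space entry procedures", 120.0, 3, 4500),
    Regulation("Font guidelines for internal memos", 0.2, 14, 800),
]

# Highest score = first onto the hypothetical 'delete list'
for reg in sorted(rules, key=redundancy_score, reverse=True):
    print(f"{redundancy_score(reg):7.1f}  {reg.title}")
```

Run it and the confined-space safety rule tops the delete list - not because it’s useless, but because it works: rules that successfully prevent disasters leave almost no trace in the data.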
What really gets under my skin is the plausible deniability aspect that another commenter raised. When things go wrong - and they will go wrong - politicians can simply point to the AI and say, “Well, the algorithm recommended it.” It’s the perfect shield against accountability. Corporate responsibility becomes algorithmic responsibility, which is to say, no responsibility at all.
Living through the pandemic here in Melbourne, we saw firsthand how quickly things can go sideways when proper oversight is abandoned. Remember the hotel quarantine debacle? That was human error and institutional failure on a relatively small scale. Now imagine that kind of systemic breakdown, but amplified across entire regulatory frameworks and backed by the supposed objectivity of AI.
The legal perspective someone shared about “Rules as Code” is fascinating and, in the right context, potentially valuable. There’s definitely merit in making regulations more accessible and understandable through technology. But there’s a world of difference between using AI to help citizens understand existing regulations and using it to wholesale eliminate them. One empowers people; the other potentially endangers them.
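For contrast, here’s a minimal sketch of what the constructive version can look like: a single, entirely made-up rebate eligibility rule transcribed into executable form, clause by clause. The thresholds and section numbers are hypothetical, but the shape is the point - the logic stays legible, testable, and traceable back to the written law.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    is_resident: bool

# Hypothetical threshold - a real Rules-as-Code project would source
# this value (and every clause below) from the enacted text.
INCOME_CEILING = 45_000.0

def eligible_for_rebate(a: Applicant) -> bool:
    """s.1(a): 18 or over; s.1(b): resident; s.2: income at or under the ceiling."""
    return a.age >= 18 and a.is_resident and a.annual_income <= INCOME_CEILING

# The same function can power a citizen-facing "am I eligible?" tool
# and a test suite that lawmakers run against draft amendments.
print(eligible_for_rebate(Applicant(age=34, annual_income=38_000, is_resident=True)))  # True
print(eligible_for_rebate(Applicant(age=17, annual_income=38_000, is_resident=True)))  # False
```

That’s the opposite of a black-box delete list: every condition is inspectable, and amending the rule means changing a line that everyone can read and debate.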
This whole situation feels like a perfect storm of technological solutionism and ideological extremism. The tech industry’s perpetual belief that complex social problems can be solved with better algorithms meets the political right’s desire to dismantle government protections. It’s a marriage made in Silicon Valley heaven and regulatory hell.
The optimist in me wants to believe that there could be a way to use AI constructively in regulatory reform - identifying genuinely redundant rules, streamlining bureaucratic processes, making compliance easier for small businesses. But that would require good faith actors, transparent processes, and robust democratic oversight. Given the political climate, I’m not holding my breath.
Perhaps the real conversation we need to be having isn’t about whether AI can help streamline regulations, but about what kind of society we want to live in. Do we want a world where corporate efficiency trumps human safety? Where algorithmic optimization matters more than democratic deliberation? Where the complexity of modern life is reduced to binary decisions made by machines?
The irony is that while we’re using AI to tear down regulatory complexity, we’re creating new forms of technological complexity that we barely understand. We’re trading known regulatory frameworks for unknown algorithmic ones. At least with traditional regulations, we can read them, debate them, and change them through democratic processes. With AI-driven deregulation, we’re essentially handing over democratic decision-making to black box algorithms.
Maybe it’s time to slow down and think more carefully about the world we’re building. Because once we’ve let the AI rip out all the regulatory pipes, we might find that some of them were load-bearing.