Who's Actually Responsible? A 1979 IBM Manual and the Question We Keep Dodging
Someone recently shared a page from a 1979 IBM training manual, and it’s been rattling around in my head ever since. The gist was simple: computers can process information, but a human being must always be accountable for the decisions made from that information. Seems reasonable, right? Almost obvious, even.
And yet here we are, 46 years later, and that principle feels more like a quaint relic than a guiding philosophy.
What struck me most wasn’t the manual itself; it was the conversation around it. Someone laid out a fairly savage little list: think of all the bankers jailed after the 2008 financial crisis. Think of the Sackler family losing their fortune over OxyContin. Think of Boeing executives facing real consequences after the 737 MAX disasters killed 346 people. The punchline, of course, is that none of that happened in any meaningful way. The bankers got bonuses. The Sacklers negotiated settlements. Boeing paid fines that amounted to a rounding error.
The IBM manual assumed accountability was a functioning mechanism. It isn’t, and honestly, it probably wasn’t even then.
One comment that really landed for me pointed out something sharp: “The people who would have been held responsible in 1979 are described as bottlenecks in a workflow today.” That’s not just a clever observation; it’s a genuine diagnosis of where we’ve ended up. We’ve reframed human judgement as an inefficiency to be optimised away rather than a safeguard worth preserving. And the economic logic is brutal: if one firm automates its decision-making and gains a competitive edge, the firm that insists on meaningful human oversight gets left behind. The principle doesn’t die through malice; it dies through market pressure.
Which brings me to the AI piece, because this is where it gets personal for me. I work in IT. I’ve spent years building systems, automating workflows, pushing things into pipelines. I find AI genuinely fascinating — the pace of development in the last few years has been extraordinary, and I’d be lying if I said I wasn’t excited by a lot of it. But I’m also increasingly uneasy, not because the technology is inherently evil, but because we’re deploying it into a world where accountability was already broken before AI entered the picture.
We’re not introducing AI into some well-regulated, ethically robust corporate environment. We’re dropping it into the same ecosystem that produced the 2008 crash, the opioid crisis, and the Boeing cover-ups. The same ecosystem where, as one commenter dryly noted, responsibility has a funny habit of sliding downhill — past the executives, past the board, past the shareholders — until it lands squarely on the people actually doing the work.
There’s also a historical footnote in that discussion that I can’t just gloss over. Thomas Watson, the man who built IBM, oversaw the punch card business whose German subsidiary helped the Nazi regime manage its census, ghetto, and railway systems. He accepted a medal from Hitler in 1937. He died enormously wealthy. IBM never formally admitted wrongdoing. That’s not ancient, abstract history; it’s a concrete example of a technology company providing infrastructure for atrocities, facing zero meaningful accountability, and continuing to thrive. If that doesn’t make you think carefully about the question of “who is responsible when technology causes harm,” I don’t know what will.
The instinct to say “well, the creator of the tool should be accountable” is understandable, but it dissolves quickly in practice. Tools get licensed, resold, repurposed, deployed by third parties, integrated into systems their original creators never imagined. The chain of responsibility gets long and murky very fast, which is exactly what certain parties prefer.
None of this means I think we should freeze AI development; that’s neither realistic nor necessarily desirable. But the 1979 IBM manual was onto something that we’ve been actively dismantling. If we’re serious about human accountability in AI systems, we need it baked into regulation, not left as a polite suggestion in a training manual that people share on Reddit nearly five decades later as a curiosity.
The framework existed. We just chose not to enforce it. Maybe the more honest conversation isn’t about what AI can do — it’s about whether we’re actually willing to hold anyone responsible when it goes wrong. Based on the track record, that’s the harder problem by far.