When Doritos Become Deadly: The Terrifying Reality of AI Security Theatre
There’s a story doing the rounds that perfectly encapsulates everything that frustrates me about the current intersection of AI hype, security theatre, and policing in America. A teenager was swarmed by eight police officers with guns drawn at his school. His crime? Having a bag of Doritos in his pocket that an AI-powered camera system flagged as a weapon.
Let me repeat that: a bag of chips was mistaken for a gun by artificial intelligence, and the response was to point multiple firearms at a child.
The part that really gets under my skin isn’t even the initial AI failure. Technology makes mistakes – I work in IT, I know this better than most. What absolutely infuriates me is what happened after. According to reports, the school’s security department reviewed the alert and cancelled it after confirming there was no weapon. But then someone decided to call in the police anyway, who showed up with guns drawn to “confirm” what had already been confirmed.
This kid stood there, terrified for his life, with multiple firearms pointed at him. Over chips. And here’s the kicker: the AI company said “everything worked as intended.” The school never apologized. The officers never apologized. Everyone just shrugged and moved on, leaving this teenager to process the trauma of nearly being shot at his own school while trying to get an education.
Reading through the various accounts of people’s experiences with security screening – from TSA horror stories to body scanner mishaps – a pattern emerges. It’s not really about safety. It’s about conditioning people to accept constant surveillance and arbitrary authority. When a TSA agent spends five minutes demanding to know where a nonexistent knife is hidden, then gets angry at not being able to find it, that’s not security. When someone gets nearly strip-searched over an eyeglass case, that’s not protection. It’s power for power’s sake.
The really insidious part is how this gets normalised. We’ve created systems where false positives are shrugged off as “the price of safety,” but we never seem to tally up the actual cost. That teenager is going to carry this experience with him. Every time he enters that school, he’ll remember the moment armed officers surrounded him because an algorithm made a mistake. How is he supposed to feel safe learning in that environment?
This is compounded by the fact that when these systems flag something, there’s no accountability mechanism. The AI company that makes this technology apparently has a tiny operation – just seven people in leadership running out of a small apartment in Virginia. They’ve essentially wrapped an ImageNet-style classifier in logic that calls the cops whenever the word “gun” turns up in its output labels. That’s the level of sophistication we’re talking about. And yet, schools are deploying this technology with real-world consequences for real children.
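To be clear about what that level of sophistication can amount to, the decision layer on top of a generic classifier can be a handful of lines. The sketch below is purely illustrative – the labels, threshold, and function names are my own assumptions, not the vendor’s code – but it shows how little can sit between an off-the-shelf model and a police dispatch when the logic boils down to “label contains ‘gun’”.

```python
# Illustrative sketch only -- not the vendor's actual code. It shows how thin the
# decision layer can be when a generic image classifier is wired straight into an
# alert pipeline: any label containing "gun" above some threshold triggers a
# dispatch, with no notion of context, uncertainty, or review.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "revolver", "rifle" -- or whatever the model decides a foil bag is
    confidence: float  # model score in [0, 1]

ALERT_THRESHOLD = 0.6  # hypothetical cut-off

def classify(frame) -> list[Detection]:
    """Stand-in for any off-the-shelf, ImageNet-style classifier."""
    raise NotImplementedError("plug in a pretrained model here")

def should_dispatch(detections: list[Detection]) -> bool:
    # The entire "weapon detection" policy: string-match the label, compare a score.
    return any("gun" in d.label.lower() and d.confidence >= ALERT_THRESHOLD
               for d in detections)

# One crumpled bag mis-labelled as a handgun is enough to page the police:
print(should_dispatch([Detection(label="handgun", confidence=0.71)]))  # True
```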
Someone in the discussion made an astute observation about computer vision AI in industrial settings. Even when these systems are carefully trained to inspect functionally identical objects, they regularly make mistakes. The evaluation process is essentially a black box – you feed it good and bad images, but how it arrives at its decisions is anyone’s guess. It often picks up on inconsequential details and gives them far too much weight.
Now imagine deploying that same unpredictable technology in a school environment, where the consequences of a false positive include armed police confronting children. The risk-reward calculation is completely out of whack.
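To see just how out of whack, it helps to run the numbers. The figures below are made up for illustration, but the shape of the result holds for any rare-event detector: when real weapons almost never appear on camera, even a tiny per-frame error rate means nearly every alert that fires is a false one.

```python
# Back-of-the-envelope arithmetic with assumed, but plausible, numbers: even a
# very "accurate" detector drowns in false positives when the thing it is
# looking for is rare.

cameras = 100               # cameras across a school district (assumption)
frames_per_day = 8 * 3600   # one analysed frame per second over an 8-hour school day
false_positive_rate = 1e-5  # one false alarm per 100,000 frames (a generous assumption)

false_alarms_per_day = cameras * frames_per_day * false_positive_rate
print(f"Expected false alarms per day: {false_alarms_per_day:.0f}")  # ~29

# Meanwhile the number of actual guns walking past those cameras on a given day
# is, at most schools, zero -- so essentially every alert that fires is wrong.
```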
What frustrates me even more is the double standard in how these systems are deployed. When it comes to actually dangerous situations – like an active shooter at a school – we’ve seen law enforcement wait outside while children are murdered. But point an AI camera at a kid with Doritos? Eight officers respond immediately with guns drawn.
There’s also a racial dimension here that can’t be ignored. The student in this case was Black. We know from countless studies and real-world examples that surveillance technologies and policing both disproportionately target people of colour. Combining the two creates a feedback loop of bias and harm.
The fundamental problem is treating the output of any detector – AI or otherwise – as absolute truth rather than as a possibility requiring human judgment. But we’ve created systems that actively discourage human judgment. Once an alert goes out, the bureaucratic machinery starts moving, and nobody wants to be the person who dismissed a potential threat, even when common sense says it’s nonsense.
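What makes this case so maddening is that the workflow described in the reports already contained the right step: a human reviewed the alert and cancelled it. Structurally, that should have terminated the escalation. The sketch below – with statuses and wording of my own invention, not anyone’s actual procedure – just makes the point explicit: a cancelled alert gets closed and logged; it doesn’t get “re-confirmed” at gunpoint.

```python
# Sketch of the review step the story says existed -- and was then ignored.
# Once a trained human cancels an alert, the escalation path should end,
# not continue to an armed response "just to confirm".

from enum import Enum, auto

class AlertStatus(Enum):
    PENDING_REVIEW = auto()
    CANCELLED = auto()
    CONFIRMED = auto()

def escalate(status: AlertStatus) -> str:
    if status is AlertStatus.CANCELLED:
        return "log and close"          # reviewed, no weapon: nothing to dispatch
    if status is AlertStatus.CONFIRMED:
        return "notify police"          # a human verified the threat
    return "hold for human review"      # never auto-dispatch on a raw model score
```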
This is what happens when we prioritise the appearance of security over actual safety. We spend money on flashy AI systems that demonstrably don’t work properly, we train (or fail to train) security personnel to treat every alert as gospel, and we create environments where children can be traumatised by their own schools in the name of protection.
The solution isn’t just better AI. The solution is fundamentally rethinking how we approach security in schools and public spaces. It means proper training for security personnel and police on de-escalation and proportional response. It means building in actual accountability when systems fail. It means acknowledging that false positives have real costs, and those costs need to be weighed against any potential benefits.
Most importantly, it means recognising that we’re raising a generation of kids in environments that treat them as potential threats. We’re teaching them that surveillance is normal, that their bodies can be searched at any time, that arbitrary authority must be obeyed without question. And then we wonder why they grow up anxious and distrustful.
That kid deserves an apology. He deserves compensation for what he went through. And every school using these systems needs to seriously reconsider whether the “security” they provide is worth the trauma they inflict. Because right now, the only thing working “as intended” is a system that values the appearance of safety over the wellbeing of actual children.
What are your experiences with security theatre or AI systems in daily life? Have you or your kids encountered similar situations? I’d be interested to hear your thoughts in the comments.