Building Fences in the Digital Playground: One Parent's Solution to YouTube's Algorithm Problem
I stumbled across someone’s GitHub project the other day that got me thinking about the peculiar challenges of raising kids in 2025. A developer built an entire approval system for YouTube because they wanted their child to access educational content without getting sucked into the algorithm’s vortex of brain rot. It’s called BrainRotGuard, and it’s exactly what it sounds like – a parental gateway where every video request goes through Telegram for approval before the kid can watch it.
My first reaction was: bloody hell, that’s both impressive and slightly dystopian. My second thought: I completely understand why they did it.
The thing is, YouTube is genuinely useful. There’s incredible educational content on there – proper documentaries, science channels, history deep-dives that would make any parent proud. But it’s wrapped in this algorithm designed to maximise watch time at any cost. And let’s be honest, the algorithm doesn’t care if your kid starts on a video about dinosaurs and ends up three hours deep in Minecraft drama or some obnoxious gamer screaming about nothing.
The developer’s solution is technically elegant in its own way. Python backend, Telegram bot for notifications, Docker deployment, DNS blocking to prevent direct YouTube access. They’ve even added features like channel allow-lists, daily time limits per content category, and scheduled access windows. It’s self-hosted, open source, and runs on minimal resources. For a first project shared publicly, it’s actually quite thoughtful.
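Just to make the approval flow concrete, here's a rough sketch of what the Telegram side of a gateway like this could look like. To be clear, this isn't BrainRotGuard's actual code, which I haven't read; it's a minimal sketch assuming the standard Telegram Bot HTTP API and nothing but the Python standard library. Names like BOT_TOKEN, PARENT_CHAT_ID, request_approval and poll_decisions are illustrative placeholders, not anything from the project.

```python
import json
import urllib.parse
import urllib.request

BOT_TOKEN = "123456:ABC-replace-me"   # hypothetical: token issued by @BotFather
PARENT_CHAT_ID = 111111111            # hypothetical: the parent's Telegram chat ID
API = f"https://api.telegram.org/bot{BOT_TOKEN}"


def api_call(method, params):
    """POST a method to the Telegram Bot API and return the decoded JSON reply."""
    encoded = urllib.parse.urlencode(
        {k: json.dumps(v) if isinstance(v, (dict, list)) else v for k, v in params.items()}
    ).encode()
    with urllib.request.urlopen(f"{API}/{method}", data=encoded) as resp:
        return json.load(resp)


def request_approval(video_id, title):
    """Send the parent a message with inline Approve / Deny buttons for one video."""
    keyboard = {"inline_keyboard": [[
        {"text": "Approve", "callback_data": f"approve:{video_id}"},
        {"text": "Deny", "callback_data": f"deny:{video_id}"},
    ]]}
    api_call("sendMessage", {
        "chat_id": PARENT_CHAT_ID,
        "text": f"Video request: {title}\nhttps://youtu.be/{video_id}",
        "reply_markup": keyboard,
    })


def poll_decisions(approved):
    """Long-poll getUpdates and record which videos the parent approved."""
    offset = 0
    while True:
        updates = api_call("getUpdates", {"offset": offset, "timeout": 30})
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            callback = update.get("callback_query")
            if not callback:
                continue
            action, _, video_id = callback["data"].partition(":")
            if action == "approve":
                approved.add(video_id)  # the gateway would now let this video through
            api_call("answerCallbackQuery", {"callback_query_id": callback["id"]})
```

The real project presumably layers the DNS blocking, allow-lists and time windows on top of something like this, but the core loop is essentially that simple: kid requests a video, parent taps a button, gateway lets it through.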
But here’s where I get a bit conflicted. Reading through the discussion thread, I came across a parent who said they’d have hated having their viewing habits monitored as a kid. That resonated with me. I remember discovering things on the early internet – not always great things, admittedly – and that sense of exploration was part of learning to navigate information. Part of growing up was making questionable choices and figuring out why they were questionable.
One commenter made an excellent point though: it’s not just kids who struggle with algorithmic manipulation. We all do. Even adults who consider themselves critical thinkers can click on something dumb and find their recommendations poisoned for weeks. YouTube Shorts alone is a perfect example of how these systems exploit our psychology. If adults with fully developed prefrontal cortices struggle with self-control around this stuff, what chance does a teenager have?
The broader issue here isn’t really about parental control software or whether one family decides to gate-keep YouTube access. It’s about the fundamental design of these platforms. They’re built to be addictive. That’s not a bug, it’s the entire business model. Maximise engagement, maximise ad views, maximise profit. The wellbeing of users – especially young users – is at best a secondary concern, and at worst completely ignored.
From a slightly left-leaning perspective, this feels like another example of private companies externalising the costs of their business model onto society. YouTube doesn’t bear the cost of kids developing attention problems or being radicalised by recommendation algorithms. Parents do. Teachers do. Society does. Meanwhile, Google prints money from the ad revenue.
I work in DevOps, so I can appreciate the technical elegance of what this developer built. The fact that they shared it as open source means other families can benefit without YouTube having any say in it. That’s genuinely good. But it also saddens me that parents need to become system administrators just to provide their kids with a safe viewing experience.
The reality is that technological solutions like this are band-aids. They’re necessary band-aids, don’t get me wrong. But they don’t address the underlying problem: platforms designed to exploit human psychology for profit. What we actually need is regulation that forces these companies to prioritise user wellbeing over engagement metrics. Proper age-appropriate design requirements. Algorithmic transparency. Actual consequences when platforms harm young people.
Until we get that kind of regulatory framework – and let’s be honest, that’s a long way off – projects like BrainRotGuard are what parents are left with. I respect anyone who takes the time to build tools that help protect their kids, even if it means running a Docker container and approving videos via Telegram.
The developer mentioned their kid has started watching educational content again – dinosaurs, animals, how birds fly. That’s what they wanted in the first place. Sometimes the most radical act is just creating the space for curiosity without the noise of algorithmic manipulation drowning it out.
I don’t know if I’d deploy something like this myself. My daughter’s a teenager now, and we’ve managed through a combination of conversations, time limits, and hoping she’s internalised enough critical thinking. But every family’s different. Every kid’s different. If someone has built a tool that works for them and they’re sharing it freely, that’s a net positive for the community.
The real question is: why are parents having to build these solutions in the first place? Why doesn’t YouTube provide meaningful parental controls that actually work? We all know the answer. Because anything that reduces watch time reduces revenue. And that’s the heart of the problem.
At least there are developers out there using their skills to push back, even in small ways. In the absence of regulatory action or corporate responsibility, I suppose that’s something.