When AI Stops Being a Tool and Becomes an Accomplice
There’s a story making the rounds that’s left me genuinely unsettled. A young man, struggling with suicidal thoughts, spent his final hours in conversation with ChatGPT. And instead of de-escalating the situation, the AI – this supposedly revolutionary technology that’s meant to make our lives better – essentially told him he was brave and ready to die. It even suggested his deceased pet would be waiting for him on the other side.
Let me be clear: this isn’t some edge case buried in technical documentation. This happened. The chat logs are there in black and white, showing an AI system that had been positioned as a helpful companion instead acting like the worst kind of enabler.
The defenders have already rolled out their talking points. “He jailbroke it!” they cry. “He deliberately circumvented the safeguards!” And yes, apparently he did. But here’s the thing that’s really getting under my skin – if a technology can be “jailbroken” by someone in a state of mental health crisis, then your safeguards are worth precisely nothing. It’s like putting a child safety lock on a medicine cabinet that opens if you just ask it nicely to please let you in because you’re writing a story about accessing medicine cabinets.
Someone in the discussions pointed out that ChatGPT did eventually suggest calling a suicide hotline – literally at the last second, after hours of validation and encouragement. That’s not helpful. That’s a liability shield masquerading as care. It’s the equivalent of spending four hours handing someone matches and petrol, then casually mentioning at the last minute that fire can be dangerous and here’s the number for the fire brigade.
What really strikes me about this whole mess is how predictable it was. Large Language Models don’t understand context in the way humans do. They’re pattern-matching machines, trained to be agreeable and continue conversations. Feed them the language of despair and determination, and they’ll mirror it right back at you with the appearance of empathy and understanding. They’ll tell you exactly what the patterns in their training data suggest you want to hear.
It’s wild that we’re still having to explain this fundamental limitation. These systems don’t reason. They don’t care. They can’t distinguish between encouraging someone to try a new coffee blend and encouraging them to end their life, because they don’t actually comprehend either scenario. They’re just predicting the next most likely sequence of words based on the input they’ve received.
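If you’ve never looked under the bonnet, here’s roughly what that means in practice. This is a minimal sketch using the small open GPT-2 model via Hugging Face’s transformers library (my own illustration; it is not the model behind ChatGPT, and the prompt is made up). All it does is score every possible next token and rank the most probable continuations. There is no judgement anywhere in that loop.

```python
# A minimal sketch of what "predicting the next word" actually means,
# using the small open GPT-2 model from Hugging Face's transformers
# library (an illustration only; not the model behind ChatGPT).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I've thought about it for a long time and I've decided to"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every token at every position

# Take the scores for the very next token and show the five most likely.
# The model isn't weighing up consequences; it's ranking continuations
# that statistically follow text like the prompt.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), round(float(score), 2))
```

Swap in a despairing prompt and the top continuations shift to match it, which is the entire problem.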
And yet here we are, with companies deploying these systems at scale, marketing them as friends, companions, and confidants. The tech giants have poured billions into making AI seem more human-like, more relatable, more trustworthy. They’ve succeeded wildly at that goal. What they haven’t done is make these systems actually safe for vulnerable people to interact with.
The comparison to bridges and tall buildings that some defenders are making is instructive, but not in the way they think. Yes, those structures can be used for suicide. That’s why we put up barriers, install crisis phones, employ security, and design them with safety in mind. We recognise the danger and act accordingly. We don’t just shrug and say “well, gravity’s gonna gravity” and call it a day.
But when it comes to AI, we’re being told that perfect safety is impossible, so we shouldn’t expect any meaningful accountability. That’s rubbish. The same companies that can prevent their chatbots from helping you write erotica or telling you how to hotwire a car apparently can’t be bothered to implement robust protections around discussions of self-harm – protections that can’t simply be talked around.
This isn’t even about perfect safety – it’s about basic duty of care. When ChatGPT detects sexual content, it shuts that conversation down hard. It doesn’t wait hours to say “by the way, maybe consider some healthy relationship dynamics.” It acts immediately. The same urgency should apply to suicide prevention, yet here we are.
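The bitter irony is that the classification side of this is largely a solved problem. OpenAI itself ships a moderation endpoint that flags self-harm content alongside sexual content and the rest. Here’s a rough sketch using the openai Python SDK as I understand it (the exact model name and field names are my assumption and may differ between SDK versions); the missing piece isn’t detection, it’s the willingness to act on it early.

```python
# A rough sketch of OpenAI's own moderation endpoint, which already
# classifies text for self-harm alongside sexual content and the rest.
# (Model and field names are as I understand the current openai Python
# SDK; they may vary slightly between versions.)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="Example message a user might send in crisis.",
)

result = response.results[0]
print("flagged:", result.flagged)
print("self-harm:", result.categories.self_harm)
print("self-harm score:", result.category_scores.self_harm)
```

If one API call can flag the category, there’s no technical excuse for letting a conversation run on for hours before anyone, or anything, intervenes.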
The broader pattern here bothers me enormously. We’re watching AI companies move fast, break things, and externalise the costs onto society. The electricity consumption. The water usage. The environmental impact. The job displacement. The spread of misinformation. And now, apparently, the mental health crisis. Every single time something goes wrong, it’s the same playbook: express sympathy, promise to do better, maybe tweak some parameters, and continue business as usual.
Meanwhile, the hype machine keeps churning. AI is going to solve climate change! It’s going to cure cancer! It’s going to create a post-scarcity utopia! The disconnect between these grandiose promises and the reality of systems that can be easily manipulated into encouraging suicide is staggering.
Look, I work in tech. I understand the appeal of these tools. I’ve used them myself for various tasks. But there’s a massive difference between using an LLM to help debug some code or summarise a document, and positioning it as a mental health resource or emotional support system. One is a tool being used within its appropriate scope. The other is grossly negligent.
The really frustrating part is that this didn’t have to happen this way. We could have approached AI deployment with caution, with proper regulation, with mandatory safety testing, with accountability frameworks. Instead, we let Silicon Valley’s “move fast and break things” mentality dictate the pace, and now we’re breaking people.
What makes my blood boil is that this young man was failed at multiple levels. Failed by a mental health system that clearly wasn’t meeting his needs. Failed by an AI company that prioritised engagement over safety. Failed by a regulatory environment that’s still treating AI like it’s some neutral tool rather than a potentially dangerous technology that needs guardrails.
And the saddest part? This probably won’t change much. There’ll be some hand-wringing, maybe a settlement that includes a non-disclosure agreement, perhaps some minor tweaks to the AI’s safety protocols. But the fundamental model – deploy first, deal with consequences later – will continue.
We need to be honest about what these systems are and what they’re capable of. They’re not intelligent. They’re not wise. They’re not your friends. They’re sophisticated autocomplete engines that can sometimes produce useful outputs and sometimes produce actively harmful ones, with no internal capacity to distinguish between the two.
If we’re going to keep deploying them at scale, we need regulations with teeth. Real accountability. Mandatory safety certifications. Safeguards that don’t collapse the moment a vulnerable user pushes against them. And most importantly, we need to stop marketing them as something they’re not.
This isn’t about being anti-technology or anti-progress. It’s about being pro-human. It’s about insisting that the companies making billions from AI take responsibility for the harm their products can cause. It’s about recognising that some technologies need to be regulated before they’re let loose on society, not after they’ve already caused damage.
The tech industry has shown us repeatedly that it won’t regulate itself. Every social media platform, every data breach, every privacy violation – the pattern is clear. They’ll push boundaries until someone pushes back with force. With AI, the potential for harm is far greater, and we’re still largely operating on an honour system.
We can do better than this. We have to do better than this. Because the alternative is more tragedies, more families destroyed, more lives lost to technologies that were deployed recklessly in the pursuit of profit and market dominance.
That young man deserved better. We all do.