Spending $500 a Day on AI Tokens: Genius Move or Just Bad Maths?
There’s a screenshot doing the rounds on social media lately — someone flexing a $500-a-day Claude API bill as proof that building your own SaaS with AI is smarter than paying $49 a month for an existing product. The original post frames it as some kind of revolutionary insight. “The End of Software,” they declared. I’ll admit, when I first saw it, my reaction was somewhere between genuine curiosity and mild secondhand embarrassment.
Let’s do the maths together, because apparently not everyone did before posting. Five hundred dollars a day is roughly fifteen thousand dollars a month. The SaaS product you’re replacing costs $49 a month. A single day of that API spend covers about ten months of the subscription; a week covers nearly six years of it. This isn’t a hot take about the future of software — it’s a cautionary tale about not understanding your own tooling costs.
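For the sceptical, the arithmetic above as a back-of-the-envelope script — the only inputs are the two figures from the original post:

```python
# The two numbers from the screenshot: a $500/day API bill versus a
# $49/month SaaS subscription. Everything else is derived.
API_SPEND_PER_DAY = 500   # USD
SAAS_PER_MONTH = 49       # USD

monthly_api = API_SPEND_PER_DAY * 30
days_per_saas_year = SAAS_PER_MONTH * 12 / API_SPEND_PER_DAY

print(f"API spend per month: ${monthly_api:,}")
print(f"One day of API covers {API_SPEND_PER_DAY / SAAS_PER_MONTH:.1f} months of SaaS")
print(f"A full year of SaaS costs {days_per_saas_year:.1f} days of API spend")
```

Running it makes the asymmetry blunt: the monthly API figure lands at $15,000, and a whole year of the subscription costs barely more than one day of tokens.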
That said, I don’t want to completely pile on, because there’s a kernel of a legitimate conversation buried in here. The discussion that erupted in the comments was actually more interesting than the original post. Some folks pointed out that spending big on tokens can make sense — if you’re processing thousands of documents, transforming unstructured data into JSON at scale, doing the kind of heavy analytical lifting that would otherwise require a team of people. One commenter made the point well: examining 200 spreadsheets, analysing them, turning them into structured data — that’s real work with real ROI. Context matters enormously.
But then there’s the other camp, and honestly it resonates with what I see in the industry. People burning through tokens because they’ve completely outsourced their thinking to the model. Not just the hard stuff — everything. Never clearing the context window, using the most expensive model for tasks that a cheaper one handles perfectly well, and then wondering why the bill looks like a mortgage payment. One comment that stuck with me pointed out that some people won’t even test or debug their own code anymore. They just keep feeding errors back to the LLM and hoping it eventually sorts itself out, which means one conversation can spiral into an expensive loop of the model trying to fix its own previous mistakes.
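The two habits called out above — routing everything to the priciest model and never trimming the context window — are both fixable with a few lines of discipline. A minimal sketch, where the model names, prices, and routing heuristic are all made-up placeholders for illustration, not any real provider’s pricing:

```python
# Illustrative only: model names and per-1K-token prices below are invented
# placeholders. The point is the routing discipline, not the numbers.
PRICE_PER_1K_TOKENS = {
    "big-frontier-model": 0.015,   # hypothetical premium tier
    "small-fast-model": 0.0005,    # hypothetical cheap tier
}

def pick_model(task: str) -> str:
    """Route simple, mechanical tasks to the cheap tier by default."""
    simple_markers = ("rename", "format", "boilerplate", "summarise")
    if any(marker in task.lower() for marker in simple_markers):
        return "small-fast-model"
    return "big-frontier-model"

def trim_context(messages: list[str], keep_last: int = 6) -> list[str]:
    """Don't resend the whole conversation every turn; keep a recent window."""
    return messages[-keep_last:]

print(pick_model("format this JSON output"))   # cheap tier
print(pick_model("design the caching layer"))  # premium tier
```

A real setup would route on something smarter than keyword matching, but even this crude version encodes the two decisions the comment thread says people skip: which model, and how much history to pay for again.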
Working in IT, I’ve watched this pattern emerge with basically every productivity tool that’s ever come along. When cloud computing first became accessible, we had teams spinning up enormous instances for workloads that could have run on a laptop. Same energy. The tool isn’t the problem — the lack of discipline around how you use it is the problem.
The local model conversation is worth paying attention to too. Someone dropped a fairly detailed comparison between Qwen 3.6 running locally versus Claude Sonnet, and the numbers are genuinely interesting — around 85–90% of the capability for near-zero ongoing cost once you’ve made the hardware investment. That’s not nothing. For a lot of everyday development tasks, “almost as good but essentially free” is a very compelling proposition. The counter-argument — that you can’t yet run state-of-the-art models locally for complex tasks — is also true, but the gap is closing faster than most people expected. Given my environmental concerns about the data centre footprint of all this cloud AI inference, I find the local model conversation particularly appealing, even if my current hardware isn’t exactly a 4090 workstation.
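The local-versus-cloud decision ultimately comes down to a break-even calculation. The figures below are assumptions I’ve picked purely for illustration — the thread didn’t quote hardware or electricity costs — but the shape of the sum is what matters:

```python
# All three inputs are illustrative assumptions, not figures from the thread.
HARDWARE_COST = 3000       # assumed one-off local-inference workstation, USD
MONTHLY_API_BILL = 300     # assumed cloud API spend it would replace, USD
MONTHLY_ELECTRICITY = 25   # assumed extra power cost of running it, USD

monthly_saving = MONTHLY_API_BILL - MONTHLY_ELECTRICITY
breakeven_months = HARDWARE_COST / monthly_saving
print(f"Hardware pays for itself in {breakeven_months:.1f} months")
```

With those assumptions the rig pays for itself in under a year — and the heavier your cloud bill, the faster that break-even arrives. It’s the same discipline as the SaaS comparison, just pointed the other way.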
The most sensible comment in the whole thread came from someone who’d been CTO of a financial services company. They drew a distinction between “vibe coding” — just throwing prompts at a model and hoping something useful comes out — and using AI as a sophisticated layer in a well-engineered workflow built on years of experience. That distinction matters. The productivity gains are real when you actually know what you’re doing and use AI to accelerate genuine expertise. They’re much less real, and considerably more expensive, when you’re using AI as a substitute for understanding what you’re building.
My daughter is sixteen and already deeply interested in software. Watching her generation approach AI tools is fascinating — they’ve grown up with them in a way that my cohort simply hasn’t. But the thing I keep coming back to, whether I’m thinking about her future or just reflecting on my own work, is that the fundamentals still matter. Understanding what a tool costs. Understanding when a cheaper or simpler option is good enough. Understanding that the metric isn’t how much you spend — it’s what you actually ship.
So no, this isn’t the end of software. It’s just software with a higher electricity bill if you’re not paying attention. The $49-a-month SaaS product you’re sneering at was probably built by someone who understood exactly when to use which model, kept their costs sensible, and shipped something people actually want to use. That’s still the job.