Claude Opus 4.7 and the AI Treadmill We’re All Running On
There’s a pattern emerging in the AI world that I find equal parts fascinating and exhausting, and the buzz around Anthropic’s upcoming Claude Opus 4.7 release has me thinking about it again over my morning batch brew.
According to The Information — apparently the tech journalism equivalent of the Oracle at Delphi — Anthropic is about to drop Opus 4.7 along with a new AI design tool aimed squarely at the presentation and website-building market. Tools like Gamma and Google Stitch are in the crosshairs. And somewhere above all of this sits “Claude Mythos,” the mysterious flagship model currently being whispered about in hushed tones, reportedly powerful enough that a select group of early partners is already using it to hunt for security vulnerabilities. Very dramatic stuff.
But here’s what actually caught my attention in the discussion swirling around this announcement: the uncomfortable pattern of current models seemingly getting worse just before a new release. People are noticing it. Developers are filing detailed GitHub issues with hard data — thousands of sessions analysed, showing measurable degradation in coding behaviour. And the community response is a fascinating mix of cynicism, resignation, and dark humour.
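For what it’s worth, the before/after comparison those issue threads describe isn’t hard to sketch. Here’s a minimal version using a two-proportion z-test — every number in it is made up for illustration, not taken from any actual GitHub issue:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test: did the task success rate change
    between two batches of sessions?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled success rate under the null hypothesis of no change
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 1,800 of 2,000 coding tasks passing one week,
# 1,620 of 2,000 the next
z, p = two_proportion_z(1800, 2000, 1620, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a sample in the thousands, even a modest drop in pass rate comes out wildly significant — which is exactly why “it’s just vibes” is a hard dismissal to sustain against that kind of data.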
One explanation floating around is that Anthropic dials back the “effort” parameter server-side on older models as infrastructure strains under massive user growth. Another theory, more cynical, is that making the current model look a bit ordinary before a release makes the new one seem more revolutionary by comparison. Whether it’s infrastructure triage or marketing sleight of hand, neither explanation is particularly flattering.
Working in IT, this pattern feels deeply familiar to me. We’ve seen it with software releases forever — the “encourage upgrade” degradation, the sudden discovery of bugs right before a new version ships. It’s not unique to AI. But what is unique here is the speed and the stakes. We’re not talking about a slightly clunkier version of Excel. We’re talking about tools that developers, designers, and knowledge workers are increasingly building their entire workflows around. When those tools get wobblier, real productivity takes a hit.
The design tool angle is the part that actually interests me, though, even if the “this will kill X startup” narrative is getting a bit tired. The promise of helping non-technical users build presentations, landing pages, and websites through natural language prompts isn’t new — most LLMs have been doing rough versions of this for a couple of years. The differentiator, if Anthropic pulls it off, would be the interactive editing layer: click an element, describe what you want changed, and have the tool actually understand context and brand style. That would be genuinely useful. Right now, consistent, brand-appropriate output from AI design tools is still hard to come by, and anyone who has had to wrangle an AI-generated slide deck into something that doesn’t look like a fever dream knows exactly what I mean.
What I keep coming back to, though, is the comment-thread reaction that distilled the anxiety a lot of people feel right now: “Why even try? AI is replacing you soon anyway.” It hit home because it voices something a lot of people in creative and technical fields are quietly sitting with. And I don’t think the answer is easy reassurance. The disruption to design, to certain categories of development work, to content creation is real, and it’s accelerating.
But here’s where I land on it: the tools that genuinely lower barriers to creation are, on balance, a good thing. If someone with a great idea but no design skills can now produce a decent landing page without hiring an agency, that’s democratising something that was previously gated behind either money or years of Figma practice. The challenge — and this is where I think the conversation needs more energy — is making sure the economic benefits of that efficiency aren’t just hoovered up by the companies at the top of the stack while the people whose skills trained these models see nothing in return.
That’s the bit I want to see more policy attention on, frankly. Not panic, not a ban, but genuine thought about how the productivity gains get shared. We’ve had that fight before in other industries, and we know what happens when workers get left behind in a technological transition.
Opus 4.7 will drop, people will hit their rate limits immediately, someone will post that it’s a skill issue and they never hit limits on their $40/month plan, and the cycle continues. But underneath all the version number noise, something genuinely significant is being built. Worth paying attention to — even if the rollout is occasionally maddening.