The Great AI Cold War: When Geopolitics Meets Machine Learning
There’s a conversation happening in the AI community right now that’s making me increasingly uncomfortable, and it’s got nothing to do with whether machines will eventually take over the world. It’s about nationalism, paranoia, and how we’re letting geopolitics strangle technological progress.
Picture this: you’re working with clients who need AI solutions that are completely air-gapped—no cloud services, no data leakage, ever. National security type stuff. Your only option is open-weight models running in closed environments. Sounds straightforward enough, right? Except there’s a catch: your clients won’t touch Chinese models with a ten-foot pole. “National security risk,” they say, as if the model weights contain some sort of digital time bomb waiting to unleash chaos.
The problem is that the most capable open-weight models right now are coming out of China—Qwen, DeepSeek, GLM. Meanwhile, the best recent offering from the US side is gpt-oss-120b, which feels about as cutting-edge as my daughter’s year-old iPhone. On most benchmarks it trails the leading Chinese open-weight models by a clear margin.
So what are organisations supposed to do? Keep using increasingly obsolete American models and watch their capabilities slowly drift into irrelevance? It’s like insisting on using a Nokia 3310 because you’re worried that a Samsung might be secretly recording your conversations—except in this case, the Nokia can barely make calls anymore.
The whole situation reminds me of working in IT during the early 2000s when people were convinced that open source software was somehow less secure than proprietary alternatives. “How can we trust code that anyone can see?” they’d ask. Well, that’s precisely why you can trust it—because anyone can audit it. The same logic applies here. These are open-weight models. Researchers, companies, and hobbyists around the world are poking at them constantly. If there were hidden backdoors or trojans, someone would have found them by now.
One discussion I came across suggested the brilliant workaround of taking a Chinese model, fine-tuning it slightly, and rebranding it as “Trump_FREEDOM_LLM” or some such patriotic nonsense. It’s funny because it’s almost plausible—slap an American flag on it and suddenly it’s acceptable. But here’s the thing: that approach would never survive an actual audit in healthcare, banking, or government sectors. And rightly so, because it’s fraud.
What really gets under my skin is the double standard at play. Nobody’s worried about backdoors in Llama models, or asking whether Meta has trained special triggers into their weights. We’re not treating European models like Mistral with the same suspicion, even though France has its own geopolitical interests. The paranoia seems to be specifically reserved for Chinese technology.
Look, I get it. China is a geopolitical rival. There are legitimate concerns about data sovereignty and national security. But when we’re talking about open-weight models running completely offline on air-gapped systems, the threat model changes dramatically. The weights are there for anyone to examine. The code is auditable. If you’re really concerned, you can fine-tune the model, scrub the weights, build your own LoRAs, and create a RAG database tailored to your specific use case. This is what enterprise AI should look like anyway.
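To make the RAG half of that pipeline concrete, here is a hedged, stdlib-only sketch: a toy TF-IDF retriever over a local corpus, standing in for whatever embedding store a real air-gapped deployment would use. Everything here runs fully offline; all function names and the scoring formula are illustrative, not any particular library’s API.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a deliberately simple stand-in for a real tokenizer
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    # Tokenise every document and count document frequency for IDF weighting
    tokenized = [tokenize(d) for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    return tokenized, df

def retrieve(query, docs, top_k=1):
    # Score each document by summed TF-IDF overlap with the query terms,
    # then return the top_k documents with a positive score
    tokenized, df = build_index(docs)
    n = len(docs)
    scores = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        score = sum(
            tf[t] * math.log((n + 1) / (df[t] + 1))
            for t in tokenize(query)
            if t in tf
        )
        scores.append((score, i))
    scores.sort(reverse=True)
    return [docs[i] for score, i in scores[:top_k] if score > 0]
```

The point of the sketch is the architecture, not the scoring maths: the retrieved passages get stuffed into the model’s context window, so your proprietary data lives in a local index you control, never in the model weights—regardless of who trained them.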
Here’s what bothers me most: this isn’t really about security. It’s about optics and politics. Someone mentioned that the real solution for these security-conscious organisations should be to modify any model—regardless of origin—and ensure there’s no suspicious code or behaviour. That actually makes sense. But that’s not what’s happening. Instead, we’re getting a blanket ban based on country of origin, as if AI models carry passports.
The irony is that if China could create sleeper agents in their models—trojans so sophisticated they survive fine-tuning, remain undetectable to millions of users, and only activate under specific conditions while maintaining top-tier performance—then they’d have essentially solved the AI alignment problem. They’d be so far ahead in the AI race that nothing else would matter anyway.
Meanwhile, American cloud AI providers openly log interactions with their hosted models, build detailed profiles on users, and have a documented history of cooperating with government surveillance programmes. But apparently that’s fine because it’s our government doing the spying.
The situation puts organisations in an impossible bind. Use increasingly outdated American models and fall behind competitors? Try to pass off Chinese models as American and risk fraud charges? Hope that Cohere in Canada or someone else fills the gap? It’s a mess, and it’s entirely self-inflicted.
What we need is a rational approach to AI security that’s based on actual threat modelling rather than reflexive nationalism. Audit the weights. Test the behaviour. Build proper security practices around model deployment. Do all the things you should be doing anyway, regardless of where the model came from.
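“Audit the weights” can start with something as unglamorous as pinning the exact artefacts you deploy. A minimal sketch, assuming the vendor publishes SHA-256 checksums for its weight shards (the manifest format and filenames here are hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so multi-gigabyte weight shards
    # don't have to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(manifest):
    # manifest maps a local file path to its published SHA-256 digest.
    # Returns a list of (path, expected, actual) tuples for any mismatch;
    # an empty list means every shard matches what the vendor shipped.
    mismatches = []
    for path, expected in manifest.items():
        actual = sha256_of(path)
        if actual != expected:
            mismatches.append((path, expected, actual))
    return mismatches
```

Checksums only prove the files are the ones that were published, not that the published weights are benign—that is what the behavioural testing and red-teaming on top are for. But it is the baseline step that applies identically to a model from Hangzhou or from Menlo Park.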
Instead, we’re apparently heading toward an AI cold war where countries will refuse to use each other’s models on principle, even when those models are demonstrably better and completely transparent. It’s the kind of knee-jerk protectionism that will leave us with inferior tools while congratulating ourselves on being “safe.”
There’s a list floating around of excellent open-weight models from the US—Llama, Phi, Nemotron, OLMo, and others. That’s great. I hope they continue to improve and remain competitive. But let’s not pretend that banning Chinese models from consideration is about anything other than politics. The weights are open. The code is auditable. The threat model for offline, air-gapped deployments is completely different from cloud services.
Sometimes I think we’re learning all the wrong lessons from history. The Soviets rejected “capitalist science” and kneecapped their agricultural research for a generation. Now we’re in danger of rejecting superior AI models because they come from the wrong country, even when they’re completely transparent and auditable.
If we want to compete in AI, we need to actually compete—build better models, invest in research, support open source development. Banning the competition’s open-weight models from our own air-gapped systems isn’t strategy; it’s just shooting ourselves in the foot while claiming it’s for security.
The AI race is going to be won by whoever builds the best technology and deploys it most effectively. Right now, we’re too busy worrying about where the models come from to focus on what they can actually do.
Tags: AI, technology, geopolitics, open-source, national-security