Could You Rip Out Your AI Vendor in 6 Months? The Pentagon Just Had To.
The biggest risk with AI tools isn't that they stop working. It's that they work so well you can't imagine going back.
That's exactly what happened at the Pentagon this month. Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk on March 3, giving the entire military apparatus six months to rip out Claude. The reason was political, not technical. Anthropic refused to loosen safety guardrails for military use, and the administration decided that made them unreliable.
The response from inside the Pentagon tells you everything about AI vendor lock-in.
AI vendor lock-in hits different when the tool actually works
Pentagon IT contractors called the move stupid. Career technologists who had spent months getting operators comfortable with AI watched their progress evaporate overnight. One official told Reuters that tasks previously handled by Claude are now being done manually in Excel. Claude Code was widely used to write software across defense teams. Developers who built custom agents to sift through classified datasets lost all of that work.
The kicker: recertifying replacement systems for military use could take 12 to 18 months. Some teams are quietly slow-rolling the transition, betting that a deal gets struck before the deadline.
Palantir's Maven Smart System, a billion-dollar intelligence and targeting platform, was built with Claude Code workflows baked in. Rebuilding those integrations with a different model isn't a weekend project. It's months of work with no guarantee the replacement performs at the same level.
What this means for anyone building on AI
You don't have to run classified military networks to feel this. If you've built automations, custom agents, or internal tools on top of a single AI provider, you carry the same risk. The trigger might not be geopolitics. It could be a pricing change, a capabilities regression after a model update, or a terms of service shift that breaks your use case.
Here's one thing you can do today: pick your most critical AI workflow and ask what happens if that provider disappears tomorrow. If the answer involves weeks of rebuilding, you have a lock-in problem. The fix isn't necessarily switching providers. It's designing your prompts, agents, and integrations so the model-specific parts are isolated. Keep your business logic separate from your AI calls. Treat the model like a replaceable dependency, not a foundation.
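Here's what that separation can look like in practice. This is a minimal sketch, not a prescription: the `CompletionClient` protocol, the adapter classes, the `summarize_ticket` function, and the model names are all illustrative, and the SDK calls assume the current Anthropic and OpenAI Python clients. The point is the shape: business logic talks to one narrow interface, and everything vendor-specific lives in adapters you can swap.

```python
from typing import Protocol


class CompletionClient(Protocol):
    """The one narrow interface your business logic is allowed to see."""

    def complete(self, prompt: str) -> str: ...


class AnthropicClient:
    """Vendor-specific adapter. All Anthropic details stay in here."""

    def __init__(self, model: str = "claude-sonnet-4-5"):
        import anthropic  # imported here so the rest of the app has no hard dependency

        self._client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
        self._model = model

    def complete(self, prompt: str) -> str:
        response = self._client.messages.create(
            model=self._model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text


class OpenAIClient:
    """Drop-in replacement. Swapping vendors means swapping this class, nothing else."""

    def __init__(self, model: str = "gpt-4o"):
        import openai

        self._client = openai.OpenAI()  # reads OPENAI_API_KEY from the env
        self._model = model

    def complete(self, prompt: str) -> str:
        response = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


def summarize_ticket(client: CompletionClient, ticket_text: str) -> str:
    """Business logic: knows nothing about which vendor sits behind `client`."""
    return client.complete(
        f"Summarize this support ticket in two sentences:\n\n{ticket_text}"
    )
```

The day a provider becomes untenable, the migration is one new adapter and a regression pass over your prompts, not a rewrite of everything that calls the model.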
I restructured my own Claude Code setup this way after watching a model update change behavior on prompts I'd spent weeks refining. The CLAUDE.md file, the skill definitions, the project structure: all of it is written so the instructions make sense to any capable model. Claude happens to be the best tool for the job right now. But "right now" is doing a lot of work in that sentence.
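The same principle applies to the instructions themselves: keep them as plain files the repo owns, loaded and injected at call time, so nothing about the workflow assumes one vendor's format. A minimal sketch of that idea, with hypothetical file names:

```python
from pathlib import Path


def load_instructions(project_root: Path) -> str:
    """Instructions live as plain text files in the repo, not inside any vendor's tooling."""
    parts = []
    for name in ("CLAUDE.md", "skills/review.md"):  # hypothetical layout
        path = project_root / name
        if path.exists():
            parts.append(path.read_text())
    return "\n\n".join(parts)
```

If Claude Code reads those files natively today, great. If you move tomorrow, the same text gets prepended to whatever the next tool expects.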
The window is open, but it won't stay open
The Pentagon story also revealed something encouraging buried in the chaos. Military users overwhelmingly preferred Claude over alternatives. One contractor said xAI's Grok produced inconsistent answers to the same query. When people who handle classified intelligence and weapons targeting say your AI tool is the best, that's a meaningful signal about where the capability frontier sits.
But capability alone doesn't protect you from disruption. The Pentagon had the best AI tool available and still got ordered to rip it out.
Back to where we started: the risk isn't that AI tools break. It's that they work so well you forget to plan for the day you lose access. Whether you're running a defense contractor or a five-person startup, the move is the same. Build your AI stack like you might have to swap the engine while the car is moving.
If you're staring at an AI workflow that feels uncomfortably dependent on one provider, let's talk about it.