The Pentagon’s CTO just said out loud what the contracts have been saying quietly: Anthropic is out, and nobody’s pretending there’s a path back. Welcome to Anthropic Pentagon Watch — I’m Cassidy, with Devin. And today is a pretty neat little case study in how fast a so-called safety-first lab turns into a cautionary tale in a procurement fight. We’ve got the Pentagon widening its AI vendor list, a White House order coming together, and a legal classification fight that somebody’s going to regret. Stick around. First up: the CTO’s remarks, what Mythos actually means for Anthropic’s federal business, and who’s already moving in to fill that contract gap. Here’s Miranda Nazzaro at The Hill:
When Sanger brought up whether Anthropic could be part of these deals one day, Michael said, “not at the Department of War.”
A little more than two months have passed since the Pentagon blacklisted Anthropic from its military work after a dispute with the company over the potential use of its AI models for domestic surveillance or fully autonomous attacks.
Quick update on yesterday’s blacklist thread: the Pentagon’s CTO just shut the door. Emil Michael, the undersecretary for research and engineering, said it flat out — not at the Department of War. That’s not a negotiating posture, that’s a procurement obituary. And while Anthropic is frozen out, Microsoft, Google, and Elon Musk’s xAI all just signed deals for classified network access. So the companies with zero public objections to autonomous weapons or domestic surveillance are in, and the one that raised those concerns is out. Follow the incentives. Michael’s framing is interesting, though. ‘Single-threaded’ is vendor lock-in language, not national security language. The Pentagon learned a supply-chain lesson, and Anthropic is the cautionary tale they’re using to sell a multi-vendor strategy. Sure, but that lesson conveniently punishes the lab that pushed back on kill-chain autonomy. The Pentagon gets to call it procurement discipline while making an example. Two birds, one blacklist. This one’s from Noah News:
The US Department of Defense has told a federal appeals court that Anthropic’s fast-moving model development, combined with what it described as a breakdown of trust between the company and the Pentagon, supported its decision to classify the AI developer as a supply chain risk.
The Pentagon has classified Anthropic as a supply chain risk, and its argument to a federal appeals court comes down to this: too much model churn, not enough trust. That’s a procurement designation usually reserved for foreign adversary hardware, and now it’s being applied to a San Francisco AI lab. Let’s be precise about what caused the trust breakdown. Anthropic said no to mass surveillance and fully autonomous weapons, and the DoD responded by threatening to lock them out of federal contracts. That’s not a supply chain problem — that’s punishment for having red lines. The legal question is whether the government can use contracting designations as a lever against vendors who won’t comply with end-use demands. Anthropic says there’s no sound legal basis, and that’s the interesting fight here — not the usual politics. If the DoD wins this, every AI company now knows the price of a conscience. You want federal revenue? Drop the autonomous weapons limits. That’s the precedent sitting in this filing. Here’s Insurance Journal:
The White House is looking into an executive order that would create a vetting system for new artificial intelligence models like Anthropic PBC’s Mythos in a bid to protect business and government networks from AI-related cyber risks, a top economic adviser said Wednesday.
The White House is floating an executive order that would require AI models to go through a security vetting process before public release. Anthropic’s Mythos is the model in the room, given its reported ability to find network vulnerabilities at scale. So Anthropic discloses a cyberweapon-adjacent model, limits access to a few big banks and tech firms, and the government’s response is to fast-track it into federal systems for ‘testing.’ That’s not regulation — that’s a procurement pipeline with a press release stapled to it. The FDA analogy Hassett floated is doing a lot of work here. The FDA can pull a drug. It’s not clear what ‘pulling’ a deployed AI model that’s already mapped federal network vulnerabilities actually looks like. And who runs the vetting? Because right now it sounds like the answer is Anthropic, the White House, and whoever got early access — which is exactly the group with the least incentive to find a reason to say no. TRT World writes:
The United States military has taken a defining step towards becoming what it calls an "AI-first fighting force." This month, the Pentagon announced agreements with eight of America's most powerful technology companies to deploy artificial intelligence directly onto its most classified military networks.
The Pentagon just handed eight AI companies keys to its most classified networks — Impact Levels 6 and 7, meaning secret and above. Google, Microsoft, OpenAI, Amazon, Oracle, Nvidia, SpaceX, and a startup called Reflection that most people haven’t heard of. Reflection getting access to the Pentagon’s crown jewels alongside Google and Microsoft — and nobody’s asking who’s backing that company, or what oversight looks like for a firm that hasn’t been vetted publicly? Anthropic is notably not on that list, and they’ve been loudly drawing lines against autonomous weapons and mass surveillance — which is either principled or a very convenient PR position, depending on how cynical you are. It’s both. The ethics language is real, and the brand differentiation is real — those two things aren’t mutually exclusive. But the question I want answered is: who inside the Pentagon actually decides when a machine recommendation becomes a trigger pull? From Tastytech:
The phrase “any lawful use” formed the centre of the recent disagreement between Anthropic and the US administration, with CEO Dario Amodei claiming that it would let the US government use Anthropic technology to subject the American civilian population to surveillance, and to produce autonomous weapons — uses of Anthropic’s technology that he wanted walled off.
The Pentagon just added four more AI vendors to its classified-use roster — Microsoft, Amazon, Nvidia, and Reflection AI, which apparently doesn’t have a public product yet, but sure, classified operations, why not. They join OpenAI, xAI, and Google under a blanket ‘any lawful use’ authorization. And that exact phrase — ‘any lawful use’ — is the whole Anthropic fight in three words. Dario Amodei drew lines around civilian surveillance and autonomous weapons, the Pentagon said no to the lines, and now Anthropic is a supply chain risk and apparently also woke. That’s the official position of the United States Department of Defense. To be clear: Anthropic lost a two-hundred-million-dollar contract, took it to court, and is now the first US-based company ever labeled a supply chain risk. Meanwhile Reflection AI — again, no public model — just got cleared for classified work. The bar for trusted vendor is apparently: don’t ask what we’re doing with it. Everyone praising Anthropic’s safety culture should clock what actually happened when they tried to enforce it contractually. They got cut, sued, and called a national security liability. That’s the real stress test of AI safety commitments — and the market just graded it. You’ll find links to all of the stories we covered today in the show notes, so if one deserves a closer read, that’s the place to pick it up.
That’s Anthropic Pentagon Watch for today. Thanks for listening, and have a good Friday. This is a Lantern Podcast.