
Pentagon AI Push Widens as Anthropic Fight Moves to Vetting (May 07, 2026)

May 07, 2026 · 8m 58s

The Pentagon's AI vendor list just got eight names longer — and Anthropic still didn't make the cut. Welcome back to Anthropic Pentagon Watch. Today: a blacklisting, a White House AI security order that's being floated, and a new autonomous warfare command that raises a lot more questions than DoD is answering. And now SpaceX is renting Anthropic the data-center capacity it needs, even as Musk is suing Anthropic's biggest competitor. That conflict-of-interest map is a mess, and we're going to trace the whole thing. The vetting fight is where the leverage is. So let's start there. Here's Byteiota:

Anthropic, maker of Claude AI, was deliberately excluded from these Pentagon AI deals after refusing to allow mass surveillance of Americans and fully autonomous weapons. The Pentagon’s response? Designate Anthropic a “supply chain risk,” terminate a $200 million contract, and ban all military contractors from using their products.

On May 1st, the Pentagon announced eight companies — Microsoft, Amazon, Google, OpenAI, Nvidia, Oracle, Reflection, SpaceX — were cleared for classified military networks up to TOP SECRET. Anthropic isn't on that list, and the reason is now in federal court. Anthropic said no to mass surveillance of Americans and no to fully autonomous kill decisions. The Pentagon answered by labeling them a supply chain risk — a designation they usually reserve for Chinese state-linked vendors — and canceling a two-hundred-million-dollar contract. That's punishment, not procurement. A federal court in California has apparently agreed, finding the government's actions were retaliatory in nature. 'Supply chain risk' is a serious legal designation with real blacklist consequences — using it to discipline a domestic company for an ethics stance is a major escalation in how that tool gets used. And notice who got the deal instead. OpenAI — the company that spent years saying it existed to prevent existential AI risk — is now on classified military networks, and nobody's saying what constraints, if any, come with that. Eight companies decided the contract was worth more than the hard lines Anthropic drew. From Hadriana Lowenkron at Insurance Journal:

The White House is looking into an executive order that would create a vetting system for new artificial intelligence models like Anthropic PBC’s Mythos in a bid to protect business and government networks from AI-related cyber risks, a top economic adviser said Wednesday.

We flagged the voluntary vetting thread last edition, and now it's got actual teeth — or at least the threat of teeth. The White House is weighing an executive order that would create a formal clearance process for new AI models, with Anthropic's Mythos as the obvious test case. The FDA analogy is doing a lot of work here. FDA approval is slow, expensive, and companies hate it — so either Hassett, the economic adviser floating the idea, doesn't know what he's proposing, or this is meant to sound serious without really constraining anybody. What's concrete so far is this: Anthropic already limited Mythos access to a short list of large tech and financial firms, and the administration wants to route it into federal agencies for government network testing. So the executive order would mostly formalize a process that's already being improvised. Which means Anthropic gets to help write the vetting rules for Anthropic's model. That's not regulation — that's a moat with paperwork. From Al Jazeera:

Under the agreement announced on Wednesday, Anthropic will use the full computing power of SpaceX’s Colossus 1 facility in Memphis, Tennessee, which houses more than 220,000 Nvidia processors and will give the Claude chatbot maker 300 megawatts of new capacity within a month. That’s enough electricity to power more than 300,000 homes – as the Dario Amodei-led company seeks to boost the capacity of its Claude Pro and Claude Max AI assistants for subscribers.

Anthropic just signed a deal to run on SpaceX's Colossus data center in Memphis — 220,000 Nvidia chips, 300 megawatts of capacity. That's the same Elon Musk who's actively suing OpenAI, and who has not exactly been quiet about his contempt for the AI safety crowd. So Anthropic — the company that built its brand on being the responsible, safety-first alternative — is now renting compute from a guy who is suing its main competitor while trying to launch his own competing AI lab. That's not a détente. That's a business arrangement with a live grenade on the table. The timing matters too. SpaceX is eyeing an IPO, Anthropic needs scale to keep up with GPT and Gemini, and a 300-megawatt capacity bump in under a month is not something you walk away from because you dislike the landlord's politics. Right, but Anthropic's whole procurement pitch to the Pentagon and enterprise clients is, 'trust us, we're the careful ones.' Who you run your infrastructure on is a policy question — and they just answered it. From Sean Lyngaas at CNN:

The Iran war has seen the US military use AI more than any conflict before, drawing on vast amounts of data — from satellites, signals intelligence and elsewhere — piped into software programs made by contractors like Palantir. AI tools like Anthropic’s Claude have sifted through the data far quicker than any human could to flag potential targets to strike for commanders, according to multiple sources familiar with US operations.

The Pentagon's line on AI in the Iran war is basically: trust us, the humans are still in the loop. What they won't say is how many seconds that human gets to review a target flagged by a Palantir dashboard before hitting approve. And Anthropic's Claude is in that pipeline — sifting targeting data. So the next time Anthropic talks about responsible AI deployment, somebody should ask them straight up whether 'responsible' includes flagging airstrike candidates in a war that just killed 168 kids at an elementary school. Hegseth's 'we follow the law' line is doing almost no work at all. Which law? What rules of engagement? What's the accountability structure when the AI flags wrong and the human rubber-stamps it in thirty seconds? The contractors get paid either way. Palantir's stock doesn't care whether the strike was accurate. That's the incentive structure nobody wants to talk about in this whole 'humans make the final call' framing. Scott A. Freling, Stephanie Barna, Elizabeth Witwer, writing in Inside Government Contracts:

On April 29, 2026, Secretary of War Pete Hegseth told the House Armed Services Committee that the Pentagon will “shortly announce a sub-unified command of autonomous warfare.” The announcement came as the Department of War (DoW) unveiled its fiscal year (FY) 2027 budget request, which proposes approximately $54 billion for the Defense Autonomous Warfare Group (DAWG)—a dramatic increase from the roughly $226 million the DAWG received previously.

Pete Hegseth told the House Armed Services Committee the Pentagon will 'shortly announce a sub-unified command of autonomous warfare' — and the FY2027 budget puts $54 billion behind that sentence, with the full drone and counter-drone stack approaching $74 billion. For context, the Defense Autonomous Warfare Group was pulling about $226 million before this. That's not a budget increase, that's a category change. A sub-unified command means autonomous warfare gets its own home inside the joint force — its own chain of command, its own contracting surface, its own budget gravity. Once that structure exists, it doesn't shrink. The question isn't whether this is big. It's who writes the rules of engagement for machines that kill people, and whether Congress ever actually votes on that. And 'shortly announce' is doing a lot of work here — no location confirmed, no authority structure published, no congressional authorization language yet. Seventy-four billion dollars looking for an org chart. You'll find links to all of today's stories in the show notes, so if one caught your attention, you can follow it there and read further. That's Anthropic Pentagon Watch for this Thursday, May 7th. This is a Lantern Podcast.