Anthropic spent years telling us it was too safety-conscious to arm the Pentagon — and now it’s in court fighting that same Pentagon. Welcome to Anthropic Pentagon Watch, where the safety branding runs headfirst into the contract reality. And today, those two are in a federal filing. We’ve got the courtroom escalation, the question of whether “AI safety” is doing policy work or PR work, and what those DoD privacy clauses actually mean for who gets the next contract. Somebody gets paid, somebody gets cut out — let’s see who it is. Information Observatory writes:
Since then, Anthropic has sued the DoD and won in a San Francisco federal court, temporarily blocking the Pentagon from labeling the company a supply-chain risk. The company then lost on appeal at the U.S. Court of Appeals in Washington, D.C., where oral argument is scheduled for May 19.
So after Friday’s blacklist fight, oral argument is set for May 19 at the D.C. Circuit. Anthropic won at the district level, lost on appeal, and now we find out whether the Pentagon gets to use “supply-chain risk” as a procurement weapon. And Google, Microsoft, Amazon, and Palantir all showed up in Anthropic’s corner. These are companies fighting each other for DoD dollars, and they still decided this precedent was worse. That’s the tell. When Palantir — the company that basically says surveillance is fine, actually — signs your brief, the issue is not ethics. It’s that nobody wants the Pentagon holding a blacklist it can pull out whenever a contractor says no to something uncomfortable. I’d love to credit Anthropic for drawing a hard line on autonomous weapons and domestic mass surveillance, but let’s be real — that line also shields every other lab from the same squeeze. May 19 will tell us whether it holds, or whether DoD just gets a cleaner blacklist tool. The Slopagandist, with Benjamin Gibert:
The Pentagon signed AI deals with eight companies this week, pointedly leaving one chair empty. Anthropic laid ground rules for defense work, so the military went shopping at OpenAI, Google, and Musk’s SpaceX instead. Meanwhile, the same Anthropic built a model “so good” at hacking that the White House restricted it to 50 organizations and treats it like a weapons system.
The Pentagon signed AI deals with eight companies this week. Anthropic wasn’t one of them — it set conditions on defense work, so DoD went shopping at OpenAI, Google, SpaceX, and the rest. Meanwhile, Anthropic’s hacking model is restricted to fifty organizations and treated like a weapons system by the White House. So the safety company got locked out of the contract room, but built the thing the Pentagon actually wants. That’s a business model, not a paradox. “We’re too ethical for weapons” until the capability gets so dangerous they treat it like one anyway. And the Trump administration — the one that spent a year calling AI safety rules government overreach — quietly started pre-release testing agreements. I’m sure that irony is just a coincidence. Nobody changed their principles. They just figured out who controls the access list. Compliance Shield, with Jordan Mercer:
The recent clash between Anthropic and the Department of Defense is more than a vendor dispute, especially alongside reporting that OpenAI agreed to follow U.S. laws the DoD has historically invoked in mass-surveillance contexts while the Pentagon held firm on bulk-analysis demands. It signals that defense AI buying is moving from "can we use this model?" to "can we structure the data flow so the government can buy it without creating unacceptable privacy, security, or policy risk?"
So a compliance vendor is out here selling a “definitive guide” to DoD-ready AI privacy controls — basically: here’s how to structure your contract so the Pentagon can buy your model without the bulk-analysis fight that sank Anthropic’s deal. The tell is right there in the framing — “reduce bulk-analysis risk,” not “prevent bulk analysis.” The goal isn’t to stop mass surveillance. It’s to make it auditable enough that nobody gets sued. To be fair, data minimization clauses and constrained audit logging are real procurement levers — if they’re actually enforced. The question is whether DoD signs a contract that limits its own collection rights and then actually sticks to it. And notice what OpenAI’s play looks like from here — they agreed to follow U.S. laws the DoD has historically used to justify mass surveillance. That’s not a privacy win. That’s a business development strategy dressed up as compliance. We’ve put links to every story from today’s briefing in the show notes, so if one of them deserves a closer read, that’s the place to start.
That’s Anthropic Pentagon Watch for today. This is a Lantern Podcast.