The Pentagon Banned Anthropic, and OpenAI Accepted the Same Terms Hours Later

Written by: Surag Patel
Published on: Mar 4, 2026

Your AI vendor's policy stance just became a procurement risk factor. Not a theoretical one. The kind where a $380 billion company loses its entire federal market overnight and your security team has six months to rip out every tool that touches its models.

Crucially, the vendor that accepted the Pentagon's terms may have created a bigger problem than the one that refused. OpenAI's deal delivers stronger practical controls than traditional procurement models, but weaker legal protections than what Anthropic demanded. The enforcement gap between technical guardrails and contractual obligations is what should be keeping security leaders up at night -- and it represents an entirely new category of AI governance risk that most organizations have not accounted for.

What Happened (and Why the News Coverage Missed the Point)

On February 26, 2026, Anthropic CEO Dario Amodei publicly rejected the Pentagon's demand that Claude be available for "all lawful purposes" without contractual restrictions. Anthropic wanted two specific guardrails written into the contract: no domestic mass surveillance and no fully autonomous weapons. Not policy suggestions. Enforceable contract terms.

The Pentagon responded immediately. Defense Secretary Hegseth designated Anthropic a "supply-chain risk to national security." All federal agencies now have six months to phase out Anthropic technology. Hours later, OpenAI published "Our agreement with the Department of War", accepting the same basic framework Anthropic refused while claiming it had secured identical ethical restrictions through a different mechanism.

Every outlet covered that story. None of them adequately explained why OpenAI's "different mechanism" matters more than the rejection itself. For a broader look at how this connects to the AI agent attack surface expanding across the industry, see our full analysis in this week's AppSec Weekly briefing.

The Enforcement Gap: Why Identical Restrictions Produce Different Risk Profiles

Both companies claim the same three restrictions: no mass surveillance of U.S. citizens, no fully autonomous weapons, no high-stakes automated decisions without human oversight. Identical restrictions. Divergent enforcement mechanisms.

Anthropic's approach: write the restrictions into the contract as legally binding terms. If the Pentagon violates them, Anthropic has standing to terminate the agreement and pursue legal remedies. Standard model. Also the model that got Anthropic banned from the federal market.

OpenAI's approach: accept the Pentagon's "any lawful use" standard, retain "full discretion over the safety stack" in a cloud-only deployment, and rely on existing law plus internal policy to prevent misuse. Sam Altman framed the distinction as "citing applicable laws" versus "specific prohibitions in the contract."

As MIT Technology Review's analysis put it: "This is exactly what Anthropic feared: restrictions that exist in spirit but lack contractual enforceability."

OpenAI's cloud-only model actually provides better real-time oversight than Anthropic's contractual approach would have. Because every Pentagon query runs through OpenAI's infrastructure, OpenAI can monitor usage patterns, flag anomalies, and unilaterally shut down problematic applications. A contract clause cannot match that level of practical control. A contract tells you what happened after a violation. A cloud-controlled safety stack can prevent the violation from executing.

Practical control is not legal obligation, though. If the Pentagon pressures OpenAI to loosen its safety stack six months from now, OpenAI's only recourse is to walk away from the contract. That is exactly the position Anthropic took from the start, except OpenAI would be taking it only after "any lawful use" has already been established as the baseline. The power asymmetry compounds over time. As OpenAI integrates deeper into Pentagon workflows, threatening withdrawal becomes less credible.

Stronger real-time controls paired with weaker structural protections. This enforcement gap has implications far beyond the Pentagon.

Two Models for Every Future AI Procurement Negotiation

This episode created a template that will repeat across every government and enterprise AI procurement for the next decade. Every vendor now faces the same fork:

  1. Anthropic Model: Demand contractual restrictions, accept market exclusion if the buyer refuses

  2. OpenAI Model: Accept broad usage terms, rely on technical controls and vendor discretion to prevent misuse

Business incentives overwhelmingly favor the OpenAI Model. No AI company wants to forfeit a market the size of the U.S. federal government. OpenAI's precedent shows how to thread the needle between ethical concerns and commercial access. Expect most vendors to follow it.

AI ethics enforcement is quietly migrating from contract law to vendor discretion. Your protections depend not on what the agreement says, but on what the vendor decides to enforce through its infrastructure, and on whether commercial incentives continue to align with your risk tolerance. This is the same dynamic playing out in enterprise shadow AI adoption, where 69% of C-suite executives already prioritize speed over policy compliance.

For security teams, this creates a risk category that traditional vendor evaluation frameworks do not address. You are no longer just assessing technical capability and financial stability. You are assessing the durability of a vendor's ethical commitments under commercial pressure, with no contractual mechanism to enforce them.

Indirect Dependency Exposure: The Hidden Supply Chain Risk

The most underappreciated dimension of the Anthropic ban is indirect exposure. If your organization uses Claude directly, the six-month phase-out timeline is obvious. Consider the tools in your security stack that embed AI models without advertising it.

Static analysis tools, code review platforms, threat intelligence services, and SIEM integrations increasingly run AI inference under the hood. Some use Claude. Some use GPT. Many do not disclose which model powers their AI features. A federal procurement ban on Anthropic does not just affect teams that bought Claude directly. It cascades through every product in your stack that calls Anthropic's API. The multi-billion-dollar Wall Street selloff triggered by Claude Code Security capabilities just weeks earlier was a preview of how fast AI vendor risk can propagate through interconnected markets.

The dependency mapping exercise is non-trivial. You need to ask every vendor in your security toolchain: which AI models do you use, where does inference run, and what happens to your product if that model provider loses federal market access? Most vendors will not have a clean answer. That gap itself is a risk signal.
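
To make that exercise concrete, here is a minimal sketch of how a team might record vendor answers in a simple inventory and flag the gaps. The tool names, field names, and schema are illustrative assumptions, not a standard format or any vendor's actual disclosure.

```python
# Minimal sketch of an AI dependency inventory for a security toolchain.
# Tool names, field names, and schema are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIDependency:
    tool: str                          # product in your security stack
    model_provider: Optional[str]      # e.g. "Anthropic", "OpenAI"; None if undisclosed
    inference_location: Optional[str]  # e.g. "vendor cloud", "customer VPC"
    has_continuity_plan: bool = False  # vendor has documented a fallback provider


def flag_exposures(inventory: list[AIDependency], restricted_provider: str) -> list[str]:
    """Flag tools that depend on a restricted provider or cannot answer the question."""
    findings = []
    for dep in inventory:
        if dep.model_provider is None:
            findings.append(f"{dep.tool}: model provider undisclosed (treat the gap as a risk signal)")
        elif dep.model_provider.lower() == restricted_provider.lower() and not dep.has_continuity_plan:
            findings.append(f"{dep.tool}: depends on {dep.model_provider} with no continuity plan")
    return findings


if __name__ == "__main__":
    inventory = [
        AIDependency("code-review-platform", "Anthropic", "vendor cloud"),
        AIDependency("siem-ai-assistant", None, None),
        AIDependency("threat-intel-service", "OpenAI", "vendor cloud", has_continuity_plan=True),
    ]
    for finding in flag_exposures(inventory, "Anthropic"):
        print(finding)
```

Even this level of structure forces the two answers that matter most: which provider sits behind each tool, and whether the vendor has thought about losing it.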

AI Vendor Policy Risk Assessment: Five Questions for Your Next Review

Here are five specific questions designed to surface risks that standard evaluations miss. Copy them into your next AI vendor review; a machine-readable sketch of the same checklist follows the list below.

1. Do your terms of service include a "lawful purposes" clause that could be unilaterally reinterpreted by a government customer?

Why it matters: "All lawful purposes" is the phrase that started this chain of events. If your vendor has already accepted this language in a government contract, their ethical guardrails exist at the discretion of whoever defines "lawful," which can change with an executive order.

2. Are your ethical guardrails implemented as contractual obligations or as vendor policy discretion?

Why it matters: Contractual obligations give you legal standing if the vendor changes course. Policy discretion means the vendor can modify guardrails without breaching any agreement. Ask specifically: if you changed your AI safety policies tomorrow, would that trigger a contract violation with any customer?

3. What percentage of your revenue comes from government contracts, and does any single government customer represent more than 20% of revenue?

Why it matters: Revenue concentration creates pressure to comply with government demands. A vendor deriving 40% of revenue from federal contracts will respond to Pentagon pressure differently than one at 5%. Higher concentration means lower likelihood of pushing back on requests to modify safety controls.

4. Is inference cloud-only, and if so, does the vendor retain unilateral control over safety-layer modifications?

Why it matters: Cloud-only deployment is a double-edged capability. It enables real-time oversight (good), but it also means the vendor can modify the safety stack without your knowledge or consent (bad). Ask whether you would be notified of safety-layer changes and whether you have any contractual right to object. For teams evaluating how to build resilience-first security postures rather than prevention-only approaches, this question is essential.

5. If your AI model provider lost federal market access tomorrow, what is your continuity plan, and how long would the transition take?

Why it matters: This tests whether the vendor has thought about the scenario at all. A vague or nonexistent answer means a single point of failure in their AI supply chain. Anthropic's ban gave the market six months of warning. The next one might not.
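
For teams that want to operationalize the list, here is a hedged sketch of the five questions encoded as a reusable checklist with a crude scoring pass. The answer keys and the idea of counting higher-risk answers are illustrative assumptions, not an established assessment framework.

```python
# Hedged sketch: the five review questions encoded as a reusable checklist.
# Answer keys and scoring are illustrative assumptions, not an established framework.
CHECKLIST = [
    (1, "Do your terms of service include a 'lawful purposes' clause a government "
        "customer could unilaterally reinterpret?", "yes"),
    (2, "Are ethical guardrails contractual obligations or vendor policy discretion?", "policy discretion"),
    (3, "Does any single government customer represent more than 20% of revenue?", "yes"),
    (4, "Is inference cloud-only with unilateral vendor control over safety-layer changes?", "yes"),
    (5, "Is there no documented continuity plan if your model provider loses federal market access?", "yes"),
]


def score_vendor(answers: dict[int, str]) -> int:
    """Count how many answers match the higher-risk response for each question."""
    return sum(
        1
        for qid, _question, risky_answer in CHECKLIST
        if answers.get(qid, "").strip().lower() == risky_answer
    )


# Example review: three of the five answers land on the higher-risk side.
answers = {1: "yes", 2: "contractual obligations", 3: "yes", 4: "yes", 5: "no"}
print(f"Higher-risk answers: {score_vendor(answers)} of {len(CHECKLIST)}")
```

The point is not the number. It is that a structured record of the answers survives personnel changes and can be re-run the moment a vendor's posture shifts.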

What This Means for Security Leaders Going Forward

The Pentagon's "all lawful purposes" demand is not going away. It will become standard language across federal AI procurement, and enterprise buyers in regulated industries will face similar pressure from their own compliance frameworks.

The Anthropic Model versus OpenAI Model split will define how every AI vendor positions itself. Watch for which path your vendors choose. Those that accept broad usage terms and rely on discretionary controls will maintain market access. Those that demand contractual protections will risk exclusion. Neither path eliminates risk for the buyer. One gives you legal recourse. The other gives you practical controls. Your actual exposure lives in the enforcement gap between them.

Anthropic went from $380 billion AI leader to federally banned vendor in 24 hours. Vendor viability in the AI sector can change on political timelines, not business ones. Traditional annual vendor reviews cannot keep pace. If your security stack depends on AI inference from any provider, you need a monitoring cadence that matches the speed at which these decisions are made.

Your next vendor review should include the five questions above. Not because this specific Pentagon episode will repeat in exactly the same form, but because the pattern it established -- ethical positions becoming procurement vulnerabilities overnight -- is now a permanent feature of the AI vendor market. Plan accordingly.