SanFrancisco.news



Pentagon warns Anthropic it may lose $200 million deal unless Claude safety safeguards are reduced

Author: Editorial Team
Published: February 27, 2026, 05:00 PM
Section: Business
Source: Wikimedia Commons / Author: Master Sgt. Ken Hammond, U.S. Air Force

A contract dispute tests how far military agencies can push frontier AI vendors on safety controls

The U.S. Department of Defense is weighing whether to cancel a contract valued at up to $200 million with San Francisco-based AI company Anthropic after a disagreement over safety restrictions in the company’s Claude models. The dispute centers on whether the Pentagon can require broader latitude to use the system for “any lawful” purpose, including in sensitive environments, without vendor-imposed guardrails that restrict certain use cases.

The contract at issue is a two-year prototype agreement administered through the Defense Department’s Chief Digital and Artificial Intelligence Office. Announced in July 2025, it established a ceiling of $200 million for Anthropic to build and test “frontier” AI capabilities for defense operations, including prototypes tailored to Department data and operational needs.

Anthropic has said it will not remove safety precautions and has framed the dispute as a question of where the boundaries lie for how its models can be used in military and domestic contexts.

In late February 2026, senior defense officials communicated an ultimatum tied to the company’s willingness to relax safeguards. Alongside potential termination, Pentagon officials have discussed labeling Anthropic a “supply chain risk,” a step that could affect the company beyond a single contract by complicating its eligibility in federal procurement ecosystems and pressuring other defense contractors to avoid dependencies on Anthropic technology.

Defense officials have argued the Pentagon is seeking authority only for lawful uses, while Anthropic has maintained that “lawful” is not sufficiently specific to prevent applications the company considers unacceptable or prone to abuse. Public statements from Anthropic leadership have pointed to concerns about domestic mass surveillance and autonomous weaponization as areas where the company is unwilling to grant unfettered access or remove constraints.

Why the standoff matters for classified AI adoption

The confrontation is unfolding as the Defense Department expands a portfolio of AI relationships with multiple major vendors. Anthropic’s tools have been positioned for use in sensitive and classified contexts, increasing the operational stakes of any abrupt change. A contract cancellation would require migration planning, continuity measures, and possible revalidation of alternative systems for comparable security and performance requirements.

  • For the Pentagon: It raises procurement and governance questions about whether model providers can impose usage limits once systems are embedded in mission workflows.

  • For Anthropic and peers: It tests whether refusing certain categories of defense use can trigger penalties extending beyond the immediate contract.

  • For the broader AI market: It highlights a growing tension between standardized government contract language and vendor safety frameworks.

What comes next

At the center of the dispute is whether the parties can reconcile contract terms that satisfy defense operational requirements while preserving vendor-defined safeguards. Anthropic has indicated it is continuing discussions while also preparing for the possibility of a transition if the Department ends the agreement. The Pentagon, meanwhile, has signaled it is prepared to escalate administrative and contractual measures if no resolution is reached by the stated deadline.