Anthropic sticks to Claude guardrails despite Pentagon pressure

A confrontation between Anthropic and the US Department of Defense has quickly escalated from a contractual disagreement into a public and politically charged conflict.

The stakes are high, according to reports by Reuters and the BBC. At issue is a defense contract worth up to $200 million and the question of the extent to which commercial AI suppliers can set limits on the military use of their technology.

At the heart of the conflict is Anthropic’s refusal to remove safety mechanisms from its AI models. These safeguards are designed to prevent the technology from being used for fully autonomous weapons systems or large-scale domestic surveillance. According to the Pentagon, they hinder the use of the models for legitimate purposes within the Department of Defense.

Pentagon demands full access to AI models

Pentagon spokesman Sean Parnell stated earlier today via X that the Department of Defense has no intention of using AI for mass surveillance of Americans or for weapons that operate without human involvement. At the same time, he made clear that the department demands full access to Anthropic’s models for all legally permitted applications. According to Parnell, the company had until 5:01 a.m. Eastern Time on Friday to agree. If it did not, the Department of Defense would terminate the collaboration and designate Anthropic a risk to the defense supply chain.

Anthropic CEO Dario Amodei responded that his company cannot agree to that request, regardless of the pressure exerted. He emphasized that the objections do not stem from mistrust of the Pentagon, but from an assessment of product safety. According to Amodei, so-called frontier AI systems are simply not yet reliable enough to make independent decisions in situations where human lives are at stake.

A source within the company explained that AI models can behave unpredictably in unfamiliar or novel scenarios. In a military context, this can lead to serious mistakes, such as strikes on friendly units, failed operations, or unintended escalation. Anthropic does not consider this risk defensible at present.

Legal boundaries for AI surveillance unclear

The company also sees the use of AI for domestic surveillance as problematic. The same source pointed out that existing legislation sets few limits on the conclusions that AI can draw from combining large amounts of data. This can lead to population-wide profiles that are not explicitly prohibited, but which Anthropic believes are contrary to the spirit of constitutional protection of citizens.

Amodei said he hoped the Department of Defense would reconsider its position, but stated that Anthropic was preparing for a possible termination of the contract. In that scenario, he said, the company would cooperate in an orderly handover to another supplier. He added that the Department of Defense has threatened not only to exclude Anthropic’s systems, but also to label the company a supply chain risk and even to invoke the Defense Production Act to force the removal of the safety measures.

According to Amodei, these threats do not change the company’s position. He stated that Anthropic is not willing to abandon its principles, even under legal or financial pressure.

Pentagon responds fiercely to criticism from Anthropic

The Pentagon’s response then took on a more personal tone. Deputy Secretary of Defense Emil Michael accused Amodei via X of distorting the truth and placing himself above the military. According to Michael, the CEO is trying to impose his personal views on the US armed forces, even where he himself believes this undermines national security. Michael emphasized that the Department of Defense will always comply with the law, but will not be dictated to by the preferences of a commercial technology company.

Anthropic is backed by investors such as Google and Amazon. A spokesperson for the company said that Anthropic remains open to dialogue and is committed to operational continuity for the Department of Defense and military personnel. At the same time, the company is receiving support from the broader AI sector. More than 200 employees of Google and OpenAI signed an open letter endorsing Anthropic’s position and warning against the rapid militarization of advanced AI.

Google and OpenAI themselves did not respond substantively to questions on the matter. This leaves the conflict unresolved for now, but sharply defined. It is not just about one contract, but about who ultimately sets the limits of AI in military and domestic applications.