The US Department of Defense has officially designated AI company Anthropic a risk to the US supply chain, escalating a conflict between the Pentagon and the developer of the Claude AI models over the conditions under which the military may use the technology.
Bloomberg reports on the controversy. According to a senior defense official, the decision takes effect immediately, even though Anthropic's software is still in use in ongoing military operations, including US activities around Iran. The Department of Defense had previously indicated there would be a six-month transition period to allow a switch to other suppliers.
The conflict arose during negotiations for a new contract between the Pentagon and Anthropic. The Department of Defense attempted to establish clear agreements on access to and use of the AI technology. However, the talks stalled when Anthropic demanded guarantees that its models would not be used for mass surveillance of American citizens or for the deployment of autonomous weapons systems.
The Department of Defense considers such restrictions unacceptable. An official involved in the talks said the military must be able to use technology for all legally permitted purposes. In the Pentagon's view, a supplier that attempts to restrict the use of critical systems can jeopardize operational safety and the deployment of military personnel.
Remarkable classification for American AI company
Anthropic previously announced its intention to legally challenge any designation as a supply chain risk. Such a classification is typically applied to companies or technologies linked to geopolitical adversaries of the United States. It is therefore remarkable that an American AI company has been designated in this way.
The consequences of the decision could be significant for both the Pentagon and Anthropic. Until recently, the Pentagon made intensive use of the company's technology: the Claude models were among the few AI systems running in its classified cloud environment, and the dedicated government variant, Claude Gov, was valued within the department for its relative ease of use.
Anthropic also plays a role within existing military systems. Claude is integrated as one of the language models in Palantir's Maven Smart System, an analysis platform widely used by US troops in the Middle East. The system helps military personnel quickly analyze large amounts of operational data. According to those involved, the technology works well and has become an important part of AI applications within US operations. The tensions with the Pentagon could therefore affect the company's future position, especially when it comes to government contracts.
It is still unclear under what legal authority the Pentagon has officially classified the company as a supply chain risk. Anthropic expects the Department of Defense to ultimately invoke a specific provision of US defense legislation for this purpose. If that happens, a lawsuit between the AI company and the US Department of Defense seems almost inevitable.