WASHINGTON, D.C. (The Washington Times) — The Defense Department on Thursday designated artificial intelligence company Anthropic a “supply chain risk” to national security — even as the firm’s AI models are being used to support the U.S. in the war against Iran.
The Pentagon hit the San Francisco-based company, maker of the popular Claude AI tool, with the label after the two sides failed to agree on how the military could use the company’s AI models.
The supply chain risk designation, previously reserved for foreign adversaries and associated companies, comes after the company voiced concern that its technology might be used for mass domestic surveillance or developing fully autonomous weapons.
The Pentagon denied planning to use Claude AI for either of those purposes.
Bloomberg first reported the formal designation, but the move came as little surprise. Defense Secretary Pete Hegseth last Friday said the Pentagon intended to take the extraordinary step.
President Trump on Thursday expressed his own frustration with the company because it refused to give the military unlimited access to its technology.
“Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that,” Mr. Trump told Politico.
Anthropic CEO Dario Amodei said in a statement that the company’s “most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.”
The Defense Department did not immediately respond to The Washington Times’ request for comment.
Anthropic says its models are currently the only ones approved for use in classified settings, but rival company OpenAI announced its own deal with the Pentagon last week. Notably, OpenAI's announcement says its deal addresses concerns about domestic surveillance and autonomous weapons.
“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the company said.
The Pentagon said it intended to wind down Claude AI’s use for military applications over the next six months.
“Anthropic will provide models to the Department of Defense and broader national security community at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition,” Mr. Amodei said.
Anthropic's statement confirmed the designation and said the company sees “no choice but to challenge it in court.”
The Financial Times reported this week that Anthropic is back in negotiations with the Pentagon, raising the possibility that the “supply chain risk” label may be temporary. That designation means that other companies cannot do business with Anthropic if they want government contracts.
The latest developments come as Anthropic’s models are being used in the planning and support for U.S. military operations in Iran, according to one defense official who spoke on the condition of anonymity due to potential security risks.
The Trump administration's public pressure on Anthropic has escalated steadily over the past week.
Analysts suggest the targeting of Anthropic may present legal issues.
“You can’t just ban a company from doing business unless there’s some reason to do it,” Dan Meyer, a national security law partner at the law firm Tully Rinckey, told The Times. “That’s why they’re reaching for the supply chain arguments. That’s why they’re reaching for the national Defense Production Act arguments — because they can see the debarment case coming.”
The Pentagon had threatened to invoke the Defense Production Act, which would have essentially compelled Anthropic to give the government unlimited access to Claude AI.
Anthropic says it had agreed to the vast majority of the Pentagon's requested use cases. But the company drew a line in the sand, according to Mr. Amodei, because laws and regulations have not caught up with the technology.