A US federal judge has temporarily blocked the Pentagon from designating artificial intelligence firm Anthropic as a “supply chain risk”, a label that could have cost the company access to federal contracts and damaged its broader business.
District Judge Rita Lin issued the order, stating that the government’s action appeared unfair and lacked adequate justification.
She also paused an earlier directive, issued under President Donald Trump, that had instructed federal agencies to stop using the company’s AI tools, including its chatbot Claude.
The designation of “supply chain risk” is typically used to flag entities considered unreliable or unsafe, often limiting their access to federal contracts. Anthropic argued that the label was imposed abruptly and without clear evidence, harming its reputation and operations.
The dispute stems from disagreements between Anthropic and the US Department of Defense over the use of its AI technology. The company had raised concerns about potential deployment of its tools in sensitive areas such as autonomous weapons and mass surveillance, following which the Pentagon moved to restrict its use.
During the hearing, Judge Lin questioned the use of measures usually applied to foreign threats against a US-based firm. She suggested that the government could have chosen to stop using the company’s products instead of issuing a broad designation.
The court’s order does not require the Pentagon to continue working with Anthropic but prevents it, for now, from formally labeling the company as a risk while the case proceeds. The government has been given time to challenge the ruling in a higher court.
Anthropic welcomed the decision and said it would continue focusing on developing safe and responsible AI systems. The Pentagon has not yet publicly responded.