The War Department filed an appeal on April 2 against a federal judge’s ruling that temporarily blocked the Pentagon from continuing to designate Anthropic as a supply-chain risk.
The designation of the artificial intelligence (AI) company, under a federal law designed to protect military systems from foreign sabotage, functions as a blacklist, preventing it from doing business with the federal government and its contractors.
This is the first time the supply-chain risk designation—which is ordinarily aimed at terrorists, foreign intelligence services, and other hostile actors—has been applied to a U.S. company.
The judge's order allows the company to continue doing business with federal agencies and contractors while the lawsuit plays out in court.
U.S. District Judge Rita F. Lin in San Francisco issued a preliminary injunction on March 26 after holding a hearing two days before.
Lin stayed her order for seven days, allowing the federal government an opportunity to appeal.
Assistant U.S. Attorney General Brett Shumate filed the notice of appeal of Lin's ruling with the U.S. Court of Appeals for the Ninth Circuit on April 2. The two-page notice does not state the grounds for the appeal.
President Donald Trump and War Secretary Pete Hegseth previously announced a federal boycott of Anthropic, directing federal agencies, contractors, and suppliers to end ties with the company.
Trump previously said on social media that Anthropic was attempting to "strong-arm" the federal government and officials elected by the American people by trying to dictate military policy.
Anthropic filed the lawsuit after it declined to change the usage policy for its AI product, Claude, to remove safety guardrails preventing its use for mass surveillance and fully autonomous weapons.
The War Department has stated that it has no plans to use Claude for those purposes.
Lin said in her ruling last month that Anthropic has stated that Claude is not ready to be safely used in fully autonomous lethal weapons or for mass surveillance of Americans.
Anthropic says the government must agree not to use its product for such purposes.
At the same time, the department argues that it, not a private company, should be the sole entity deciding which functions its AI tools can safely carry out, Lin said.
“This public policy question is not for this court to answer in this litigation. It is the Department of War’s prerogative to decide what AI product it uses,” the judge said.
However, evidence shows that the department is punishing Anthropic for “criticizing the government’s contracting position in the press,” which constitutes “classic illegal First Amendment retaliation,” she said.
The department’s own records indicate that it imposed the designation because of the company’s “hostile manner through the press,” Lin said.
It is unclear when the Ninth Circuit will take up the appeal.