
Pentagon moves to designate Anthropic as a supply-chain risk

In a post on Truth Social, President Trump ordered federal agencies to cease use of all Anthropic products following the company’s public dispute with the Department of Defense. The president allowed a six-month phase-out period for departments using the products, but emphasized that Anthropic was no longer welcome as a federal contractor.

“We don’t need it, we don’t want it, and we won’t be doing business with them again,” the president wrote in the post.

Notably, the president’s post made no mention of plans to designate Anthropic as a supply-chain risk, as had previously been threatened. However, a subsequent post from Defense Secretary Pete Hegseth made good on that threat.

“In conjunction with the President’s directive to the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic as a national security risk in the supply chain,” Secretary Hegseth wrote. “Effective immediately, no contractor, supplier or partner doing business with the U.S. military may engage in any commercial activity with Anthropic.”

The Pentagon dispute centered on Anthropic’s refusal to allow its AI models to be used for mass domestic surveillance or for fully autonomous weapons, which Secretary Hegseth found overly restrictive.

CEO Dario Amodei reiterated his position in a public post on Thursday and refused to compromise on these two points.

“Our strong preference is to continue serving the Department and our warfighters – with the two requested safeguards in place,” Amodei wrote at the time. “Should the Department choose to retire Anthropic, we will work to ensure a smooth transition to another supplier while avoiding any disruption to ongoing military planning, operations or other critical missions.”

OpenAI supports Anthropic’s decision. According to the BBC, CEO Sam Altman sent a memo to staff on Thursday saying he shared the same “red lines” and that all OpenAI-related defense contracts would also reject uses that were “unlawful or inappropriate for cloud deployments, such as domestic surveillance and autonomous strike weapons.”

OpenAI co-founder Ilya Sutskever, who had a very public falling out with Altman in November 2023 and has since co-founded his own AI company, also weighed in on Friday, writing on X: “It’s extremely good that Anthropic hasn’t backed down, and it’s significant that OpenAI has taken a similar stance.

In the future, there will be many more challenging situations of this nature, and it will be crucial for the relevant leaders to tackle this situation, and for fierce competitors to put aside their differences. It’s good to see that happening today.”

Anthropic, OpenAI and Google each received contract awards from the U.S. Department of Defense last July. While some Google employees have spoken out in support of Anthropic, Google and its parent company have not yet commented.
