Anthropic vs. the Pentagon: What’s actually at stake?

The past two weeks have been marked by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, as the two battle over the military’s use of AI.
Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that carry out attacks without human input. At the same time, Secretary Hegseth has argued that the Defense Department should not be limited by a supplier’s rules, and that any “legal use” of the technology should be allowed.
On Thursday, Amodei made it publicly clear that Anthropic is not backing down, despite threats that the company could be designated a supply chain risk as a result. But with the news cycle moving quickly, it’s worth reexamining what exactly is at stake in the battle.
At its core, this battle is about who controls powerful AI systems: the companies that build them, or the government that wants to deploy them.
What is Anthropic concerned about?
As we said above, Anthropic doesn’t want its AI models to be used for mass surveillance of Americans or for autonomous weapons without humans involved in the targeting and firing decisions. Traditional defense companies typically have little say in how their products will be used, but Anthropic has argued from the beginning that AI technology poses unique risks and therefore requires unique safeguards. From the company’s perspective, the question is how to enforce these safeguards when the technology is used by the military.
The US military already relies on highly automated systems, some of which are deadly. The decision to use lethal force has historically been left to humans, but there are few legal restrictions on the military use of autonomous weapons. The Department of Defense does not categorically ban fully autonomous weapon systems. According to a 2023 DOD Directive, AI systems can select and attack targets without human intervention, as long as they meet certain standards and are reviewed by senior defense officials.
That’s exactly what makes Anthropic nervous. Military technology is secretive by nature, so if the U.S. military were to take steps to automate lethal decision-making, we might not know about it until it was operational. And if it used Anthropic’s models, it could count as “legal use.”
Anthropic’s position is not that such applications should be definitively off the table. It is that the models are not yet capable enough to support them safely. Imagine an autonomous system that misidentifies a target, escalates a conflict without human consent, or makes a lethal decision that cannot be reversed in a split second. Put a less capable AI in charge of the weapons, and you get a very fast, very confident machine that’s bad at making high-stakes decisions.
AI also has the power to enhance legal surveillance of American citizens to a worrying degree. Under current US law, surveillance of US citizens is already possible through the collection of text messages, emails, and other communications. AI changes the equation by enabling automated large-scale pattern detection, entity resolution across data sets, predictive risk scoring, and continuous behavioral analysis.
What does the Pentagon want?
The Pentagon’s argument is that it should be able to deploy Anthropic’s technology for any lawful use it sees fit, rather than being limited by Anthropic’s internal policies on things like autonomous weapons or surveillance.
More specifically, Secretary Hegseth has argued that the Department of Defense should not be limited by a supplier’s rules, and that it would confine itself to “legal use” of the technology.
Sean Parnell, the Pentagon’s chief spokesman, said in a Thursday X-post that the department has no interest in carrying out large-scale domestic surveillance or deploying autonomous weapons.
“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said. “This is a simple, common sense request that will prevent Anthropic from jeopardizing critical military operations and potentially endangering our warfighters. We will NOT let ANY company dictate the terms regarding how we make operational decisions.”
He added that Anthropic has until 5:01 PM ET on Friday to decide. “Otherwise, we will terminate our partnership with Anthropic and consider it a supply chain risk for the DoW,” he said.
Although the Department of Defense frames its position as simply not wanting to be limited by a company’s usage policy, Secretary Hegseth’s concerns about Anthropic have sometimes appeared tied to cultural grievances. In a January speech at the offices of SpaceX and xAI, Hegseth denounced “woke AI” in remarks that some saw as a preview of his feud with Anthropic.
“The War Department AI will not be woke,” Hegseth said. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
So what now?
The Pentagon has threatened to designate Anthropic a “supply chain risk” — essentially blacklisting Anthropic from doing business with the government — or invoke the Defense Production Act (DPA) to force the company to tailor its model to the needs of the military. Hegseth has given Anthropic until Friday at 5:01 p.m. to respond. But as the deadline approaches, it is anyone’s guess whether the Pentagon will make good on its threat.
This is not a fight that either side can easily walk away from. Sachin Seth, a VC at Trousdale Ventures who focuses on defense technology, says a supply chain risk label for Anthropic could mean “lights out” for the company.
However, he said that if Anthropic is dropped by the DoD, it could also create a national security problem.
“[The Department] would have to wait six to 12 months for OpenAI or xAI to catch up,” Seth told TechCrunch. “There remains a period of up to a year where they may not be working with the best model, but with the second or third best.”
xAI is positioning itself to replace Anthropic, and Elon Musk’s rhetoric on the issue suggests the company would have no problem giving the DoD full control over its technology. Recent reports, however, indicate that OpenAI may draw the same red lines as Anthropic.




