
New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput

Anthropic filed two affidavits in a California federal court Friday afternoon, pushing back on the Pentagon’s claim that the AI company poses an “unacceptable risk to national security” and saying the government’s case rests on technical misunderstandings and claims that were never actually raised during the months of negotiations leading up to the dispute.

The statements were filed alongside Anthropic’s response brief in the lawsuit against the Department of Defense and come ahead of a hearing next Tuesday, March 24, before Judge Rita Lin in San Francisco.

The dispute dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly stated they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The two people who filed the affidavits are Sarah Heck, Anthropic’s head of policy, and Thiyagu Ramasamy, the company’s head of public sector.

Heck is a former National Security Council official who worked in the White House under the Obama administration before moving to Stripe and then to Anthropic, where she leads the company’s government relations and policy work. She was personally present at the Feb. 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Pentagon Assistant Secretary Emil Michael.

In her declaration, Heck cites what she describes as a central falsehood in the government’s filings: that Anthropic demanded some kind of approval role in military operations. According to her, that claim is simply not true. “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee say that the company wanted such a role,” she wrote.


She also says the Pentagon’s concern that Anthropic could disable or alter its technology during operations was never raised during negotiations. Instead, she says, it first appeared in the government’s court filings, giving Anthropic no opportunity to respond.


Another detail in Heck’s statement that is sure to turn heads is that on March 4 — the day after the Pentagon formally finalized its supply chain risk designation against Anthropic — Assistant Secretary Michael emailed Amodei to say the two sides were “very close” on the two issues the administration now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.

The email, which Heck attaches as an exhibit to her statement, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei published a statement saying the company was having “productive conversations” with the Pentagon. The day after, Michael posted on X that “there are no active Department of War negotiations with Anthropic.” A week after that, he told CNBC there was “no chance” of renewed talks.

Heck’s point seems to be: If Anthropic’s stance on those two issues makes it a national security threat, why did a Pentagon official say the two sides were in close agreement on those exact issues right after the designation was made final? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she outlines leaves the question hanging.)


Ramasamy brings a different kind of expertise to the dispute. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI implementations for government clients, including classified environments. At Anthropic, he is credited with building the team that brought the company’s Claude models into national security and defense environments, including the $200 million contract with the Pentagon announced last summer.

His declaration takes up the government’s claim that Anthropic could theoretically disrupt military operations by disabling the technology or otherwise altering its behavior, which Ramasamy says is not technically possible. By his account, once Claude is deployed into a government-secure, air-gapped system operated by a third-party contractor, Anthropic has no access to it; there’s no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any form of “operational veto” is a fiction, he suggests, explaining that any change to the model would require explicit approval and action from the Pentagon.

Anthropic, he says, can’t even see what government users are typing into the system, let alone extract that data.

Ramasamy also disputes the government’s claim that Anthropic’s hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting — the same background check required for access to classified information — and adds in his statement that Anthropic is “to my knowledge” the only AI company where authorized personnel actually built the AI models designed to run in classified environments.

Anthropic’s lawsuit alleges that the supply chain risk designation — the first ever applied to a U.S. company — amounts to government retaliation for the company’s publicly expressed views on AI safety, in violation of the First Amendment.


The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not punishment for the company’s positions.

