OpenAI and Google employees rush to Anthropic’s defense in DOD lawsuit

More than 30 OpenAI and Google DeepMind employees filed a declaration Monday in support of Anthropic’s lawsuit against the U.S. Department of Defense after the federal agency labeled the AI company a supply chain risk, court documents show.
“The government’s designation of Anthropic as a supply chain risk was an inappropriate and arbitrary use of power that has serious consequences for our industry,” reads the letter, which was signed by Google DeepMind chief scientist Jeff Dean, among others.
Late last week, the Pentagon labeled Anthropic a supply chain risk — usually reserved for foreign adversaries — after the AI company declined to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or to fire weapons autonomously. The Defense Department had argued that it should be able to use AI for any “legal” purpose and not be restricted by a private contractor.
The amicus brief in support of Anthropic appeared on the docket a few hours after the Claude creator filed two lawsuits against the DOD and other federal agencies. Wired first reported the news.
In the brief, the Google and OpenAI employees argue that if the Pentagon were "no longer satisfied with the agreed upon terms of its contract with Anthropic," the agency "could have simply terminated the contract and purchased the services of another leading AI company."
Indeed, the DOD signed a deal with OpenAI within moments of Anthropic becoming a supply chain risk — a move that many employees of the ChatGPT maker protested.

“If we proceed, this attempt to punish one of America’s leading AI companies will undoubtedly impact the United States’ industrial and scientific competitiveness in artificial intelligence and beyond,” the letter reads. “And it will cool the open discussion in our field about the risks and benefits of current AI systems.”
The filing also argues that the red lines drawn by Anthropic reflect legitimate concerns that warrant strong guardrails. Without public law governing the use of AI, the signatories contend, the contractual and technical limitations that developers place on their systems are a crucial safeguard against catastrophic abuse.
Many of the employees who signed the statement have also signed open letters in recent weeks urging the DOD to rescind the label and calling on their companies’ leaders to support Anthropic and reject unilateral use of their AI systems.