No one has a good plan for how AI companies should work with the government

As Sam Altman discovered Saturday night, it’s a tough time to do work for the U.S. government. Around 7 p.m., the OpenAI CEO announced he would answer questions publicly on X, an attempt to demystify his company’s decision to take over the Pentagon contract that Anthropic had just walked away from.
Most of the questions boiled down to OpenAI’s willingness to engage in mass surveillance and automated killing – the very activities Anthropic had ruled out in its negotiations with the Pentagon. Altman mostly deferred, saying it was not his role to set national policy.
“I believe deeply in the democratic process,” he wrote in one response, “and that our elected leaders have the power and that we all must uphold the Constitution.”
An hour later, he admitted he was surprised that so many people seemed to disagree. “There’s a more open debate than I thought,” Altman said, “about whether a democratically elected government or unelected private companies should have more power. I think this is something people disagree on.”
It’s a telling moment for both OpenAI and the tech industry as a whole. In his Q&A, Altman took a position common in the defense industry, where military leaders and industry partners are expected to defer to civilian leadership.
But what’s more telling is that as OpenAI transitions from a wildly successful consumer startup to part of the nation’s security infrastructure, the company appears ill-equipped to manage its new responsibilities.
Altman’s public town hall came at an important moment for his company. The Pentagon had just blacklisted OpenAI rival Anthropic for pushing for contractual restrictions on surveillance and automated weapons. Hours later, OpenAI announced it had won the same contract Anthropic gave up. Altman portrayed the deal as a quick way to de-escalate the conflict – and it was certainly a lucrative one. But he seemed unprepared for the massive backlash it caused among both users and company employees.
OpenAI has been working with the US government for years, but not like this. When Altman testified before congressional committees in 2023, he still mostly followed the social media playbook: bombastic about the company’s world-changing potential, while acknowledging the risks and engaging enthusiastically with lawmakers – a perfect combination for encouraging investors while avoiding regulation.
Less than three years later, that approach is no longer sustainable. AI is so clearly powerful, and its capital needs so intense, that a more serious entanglement with government is impossible to avoid. The surprise is how unprepared both sides seem to be for it.
The biggest immediate conflict involves Anthropic itself, and U.S. Defense Secretary Pete Hegseth’s plan, announced Friday, to designate the lab as a supply chain risk. That threat hangs over the entire conversation like an unfired gun. As former Trump official Dean Ball wrote this weekend, the designation would cut Anthropic off from hardware and hosting partners, effectively destroying the company. It would be an unprecedented move against an American company, and while it could ultimately be reversed in court, in the meantime it would wreak havoc and send shockwaves through the industry.
As Ball describes it, Anthropic was executing an existing contract under terms established years earlier – only to have the government insist on changing them. That goes far beyond anything that would fly between private companies, and it sends a chilling message to other suppliers.
“Even if Secretary Hegseth backs down and downplays his extremely broad threat against Anthropic, major damage has been done,” Ball wrote. “Most corporations, political actors and others will have to operate under the assumption that tribal logic will now rule.”
It’s a direct threat to Anthropic, but also a serious problem for OpenAI. The company is already under intense pressure from employees to maintain the appearance of a red line. At the same time, right-wing media will be alert to any sign that OpenAI is a less-than-loyal political ally. In the middle of everything is the Trump administration, doing its best to make the situation as difficult as possible.
You could argue that OpenAI didn’t set out to become a defense contractor, but its huge ambitions have forced it to play the same game as Palantir and Anduril. Breaking through during the Trump administration means choosing sides. There are no apolitical actors here, and winning some friends means alienating others. It remains to be seen how high a price OpenAI will pay, whether in lost business or lost employees, but it is unlikely to emerge unscathed.
It may seem strange that this crackdown is happening at a time when more prominent tech investors hold positions of influence in Washington than ever, but most of them seem perfectly content with the tribal logic. Among Trump-aligned venture capitalists, Anthropic has long been seen as currying favor with the Biden administration in ways that would hurt the larger industry – a perception underscored by Trump adviser David Sacks’ response to the ongoing conflict. Now that the reverse has happened, few seem willing to stand up for the broader principle of free enterprise.
This is a difficult position for any company – and while politically aligned players may benefit in the short term, they will be just as vulnerable when the political winds inevitably change. There’s a reason the defense sector was dominated for decades by slow-moving, heavily regulated conglomerates like Raytheon and Lockheed Martin. Operating as an industrial wing of the Pentagon gave them the cover they needed to stay out of partisan fights, letting them focus on the technology without having to hit reset every time the White House changed hands.
Today’s startup competitors may be moving faster than their predecessors, but they are far less prepared for the long term.




