
Will the Pentagon’s Anthropic controversy scare startups away from defense work?

In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology collapsed, the Trump administration labeled Anthropic a supply chain risk, and the AI company said it would challenge that designation in court.

OpenAI, meanwhile, quickly announced its own deal, sparking a backlash that saw users uninstall ChatGPT and push Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane and I discussed what this means for other startups looking to work with the federal government, especially the Pentagon, with Kirsten wondering, “Are we going to see a change in tone?”

Sean pointed out that this is an unusual situation in some ways, in part because OpenAI and Claude are making products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are or are not being used to kill people,” so it will obviously attract more attention.

Still, Kirsten argued, this is a situation that “should give every startup pause.”

Read a preview of our conversation below, edited for length and clarity.

Kirsten: I wonder if other startups are starting to look at what happened with the federal government, especially the Pentagon and Anthropic, that debate and wrestling match, and think for a moment about whether they want to go after federal dollars. Are we going to see a slight change in tone?

Sean: I wonder that too. I don’t think so, at least in the short term, if only because, if you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s, that are working with the government and particularly with the Department of Defense or the Pentagon, for many of them, this work remains under the radar.

General Motors makes, and has made, defense vehicles for the military, and has worked on all-electric and autonomous versions of those vehicles. Stuff like that happens all the time, and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into this past week is that these are companies that make products a lot of people use – and more importantly, products no one can shut up about.

So there’s such a big spotlight on them, which obviously highlights their involvement at a level that I don’t think most other companies that have contracts with the federal government – and in particular all the war elements of the federal government – necessarily have to deal with.

The one caveat I would add is that a lot of the conversation around this dispute between Anthropic, OpenAI and the Pentagon is very specifically about how their technologies are or are not being used to kill people, or used in parts of missions that kill people. It’s not just the attention focused on them and the familiarity we have with their brands; there’s an additional element that I think is more abstract when you think of General Motors as a defense contractor, or whatever.


I don’t think we’ll see Applied Intuition or any of those other companies that have labeled themselves as dual-use pull back much, just because I don’t see the same spotlight on them, and there just isn’t the same shared understanding of what that impact could be.

Anthony: This story is in many ways so unique and specific to these companies and personalities. I mean, there’s been a lot of interesting thinking about, what is the role of technology in government? Of AI in government? And I think these are all good and valuable questions to ask and explore.

However, I also think this is a very curious lens through which to examine some of these things, because Anthropic and OpenAI actually aren’t that different in many ways, or in the positions they take. It’s not like one company says, “Hey, I don’t want to work with the government,” and the other says, “Yes, I do.” Or one says, “You can do whatever you want,” and [the other is] saying, “No, I want to have limitations.” Both say, at least publicly, “We want restrictions on how our AI is used.” It seems Anthropic is much more concerned with: you can’t change the terms this way.

And on top of that, there also just seems to be a layer of personality between Anthropic’s CEO and Emil Michael – who many TechCrunch readers may remember from his Uber days, and who is now [chief technology officer for the Department of Defense]. Apparently they just don’t like each other. Allegedly.

Sean: Yes, there’s a very big personal-feud element to this that we shouldn’t overlook.


Kirsten: Yes, a little bit. That’s true, but the implications are a little stronger than that. To back up a bit, what we’re talking about here is the Pentagon and Anthropic getting into a dispute that Anthropic appears to have lost, although I should say their technology is still widely used by the military. It’s considered crucial technology, but OpenAI has stepped in a bit, and this is evolving and will likely change by the time this episode comes out.

The backlash has been interesting for OpenAI, where we’ve seen a lot of ChatGPT uninstalls – I think they increased by 295% after OpenAI struck its deal with the Department of Defense.

To me, this is all noise around the really crucial and dangerous point, which is that the Pentagon tried to change the terms of an existing contract. That’s very important and should give any startup pause, because the political machinery at work now, especially in the Department of Defense, seems different. This is not normal. Contracts take forever to get established at the government level, and the fact that the Pentagon is trying to change those terms is a problem.
