
Will the Pentagon’s Anthropic Controversy Scare Innovators Out of the Defense Industry?


Just over a week after negotiations over the Pentagon’s use of Anthropic’s Claude technology broke down, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight the designation in court.

OpenAI, meanwhile, was quick to publicize its own deal, prompting a backlash that saw users deleting ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. One of OpenAI’s executives resigned over concerns that the announcement was made hastily and without proper warning.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups looking to work with the federal government, especially the Pentagon. As Kirsten wondered, “Are we going to see a little bit of a tune change?”

Sean noted that this situation is fraught in several ways, in part because OpenAI and Anthropic make products that “no one can shut down.” Most strikingly, the dispute is about “how their technologies are being used or not being used to kill people,” so it naturally draws intense scrutiny.

However, Kirsten argued, this is a situation that may give startups pause.

Read a preview of our interview, edited for length and clarity, below.

Kirsten: I wonder if some startups are starting to look at what has happened between the federal government, especially the Pentagon, and Anthropic, watching the conflict play out, and taking pause before deciding whether they want to pursue federal dollars. Will we see a little bit of a tune change?


Sean: I wonder about that too. I think not, to some extent, because when you think about all the different companies, whether startups or the most established Fortune 500s, that work with the government, and especially with the Department of Defense or the Pentagon, for many of them the work flies under the radar.

General Motors makes military defense vehicles, and has for a very long time, working on both electric and autonomous models. Things like this happen all the time and don’t register in the zeitgeist. I think the issue OpenAI and Anthropic ran into over the last week is that these are companies making products that many people use, and, most importantly, that no one can shut down.

So there’s a kind of spotlight on them that naturally highlights their involvement to a degree that a lot of other companies doing business with the federal government, including on its war-fighting work, don’t have to deal with.

The one caveat I would add is the intense heat surrounding these discussions between Anthropic, OpenAI, and the Pentagon, specifically over how their technologies are or are not being used to kill people, or to support other parts of missions that kill people. That focus on them, and on what we know of their brands, adds a dimension that just isn’t there when you think of General Motors or anyone else as a defense contractor.

I don’t think we’re going to see, like, Applied Intuition or all these other companies that have been pitching themselves as dual use back away, because I don’t see that kind of reckoning happening.

Anthony: This story is in many ways unique to these companies and personalities. I mean, there’s been a lot of very interesting news about the role of technology in government, and of AI in government, and I think these are all good questions worth asking and researching.

I also think, however, that this is an interesting lens through which to look at some of those questions, because Anthropic and OpenAI aren’t really that different in what they’re doing or how they’re doing it. It’s not like one company is saying, “Hey, I don’t want to work with the government,” and the other is saying, “Yes, I do.” Or one saying, “You can do whatever you want,” and the other, “No, I want restrictions.” Both, at least publicly, say, “We want restrictions on how our AI is used.” It just looks like Anthropic is digging in its heels over the idea that you can’t change wording like this.

And on top of that, there also seems to be a personality layer. Anthropic’s CEO and Emil Michael, whom many TechCrunch readers may remember from his Uber days and who is now the Department of Defense’s chief technology officer, don’t really seem to love each other.

Sean: Yes, there is a big personal-feud element here that we shouldn’t ignore.

Kirsten: Yeah, just a little bit. There is, but the stakes are a little higher than that. To go back a bit, what we’re talking about here is the Pentagon and Anthropic ending up in a dispute that Anthropic seems to have lost, even though, I should say, its technology is the one more widely used by the military and considered important. OpenAI has stepped in, though, and that picture is changing and will probably keep changing at this stage.

The fallout has been interesting for OpenAI: we’ve seen a lot of ChatGPT deletions, up 295% I believe, after OpenAI closed its deal with the Department of Defense.

To me, all of this is noise around something harder and more dangerous, which is that the Pentagon wants to change the wording of an existing agreement. That is significant, and it should stop people at the outset, because the political machinery at work right now, especially within the DoD, seems different. Contractor friction is not unusual; it always takes companies time to break into government work. But the fact that the Pentagon wants to change these terms is the hard part.



