
Anthropic vs. Pentagon: Is it really at risk?


The last two weeks have been defined by a struggle between Anthropic CEO Dario Amodei and Secretary of Defense Pete Hegseth over the military’s use of AI.

Anthropic refuses to allow its AI models to be used to surveil Americans or to power autonomous weapons that strike without human oversight. Secretary Hegseth, meanwhile, has said that the Department of Defense should not be limited by vendor restrictions, arguing that any “lawful use” of the technology should be allowed.

On Thursday, Amodei signaled publicly that Anthropic is not backing down, despite the threat that his company could be classified as a supply chain threat. With the news moving so fast, it’s worth recapping what’s at stake in this battle.

At its core, the battle is about who controls powerful AI systems — the companies that build them, or the government that wants to deploy them.

What is Anthropic worried about?

As mentioned above, Anthropic doesn’t want its AI models used to surveil Americans or to power autonomous weapons that can target and fire without a human in the loop. Traditional defense contractors often have little say in how their products are used, but Anthropic has argued since its founding that AI poses unique risks and therefore requires special safeguards. In the company’s view, the question is how to maintain those safeguards once the technology is in military hands.

The US military already relies on autonomous systems, some of which are deadly. The decision to use lethal force has traditionally been left to humans, but there are few legal restrictions on autonomous weapons, and the DoD does not ban them outright. According to a 2023 DoD directive, AI systems can select and engage targets without human intervention, as long as they meet certain requirements and are reviewed by officials.

That’s exactly what worries Anthropic. Military technology is secretive by nature, so if the US military adopts lethal autonomous decision-making, we may not know until it is already in use. And if it uses Anthropic’s models, that could still count as “lawful use.”


Anthropic’s position is not that such activities should never happen; it’s that its models cannot support them safely. Imagine an autonomous system misidentifying a target, escalating a conflict without human sign-off, or making a lethal decision that no one can reverse. Put an imperfect AI model in charge of weapons, and you get a very fast, very confident machine that is unreliable at making high-stakes calls.

AI also has the power to scale surveillance of American citizens to an unprecedented degree. Under current US law, surveillance of Americans is already possible, whether of letters, emails, or other communications. AI changes the equation by enabling mass-scale analysis, network clustering, predictive analytics, and continuous behavioral monitoring.

What does the Pentagon want?

The Pentagon’s argument is that it should be able to use Anthropic’s technology for any lawful purpose it deems appropriate, rather than being limited by Anthropic’s internal policies on things like autonomous weapons or surveillance.

In particular, Secretary Hegseth has said that the Department of Defense should not be limited by vendor restrictions and that it intends to make “lawful use” of the technology.

Sean Parnell, a senior Pentagon spokesman, said in an X post on Thursday that the department has no interest in mass surveillance at home or in the use of autonomous weapons.

“Here’s what we’re asking: Allow the Pentagon to use the Anthropic model for all legitimate purposes,” Parnell said. “This is a simple and clear request that will prevent Anthropic from endangering our military forces and putting our warfighters at risk.”

He added that Anthropic has until 5:01 pm ET on Friday to make a decision. “Otherwise, we will terminate our partnership with Anthropic and consider them a supply chain threat,” he said.

While the DoD says it shouldn’t be limited by vendor usage policies, Secretary Hegseth’s concerns about Anthropic sometimes appear to be cultural. In speeches at the SpaceX and xAI offices in January, Hegseth derided “woke AI” in remarks that some saw as foreshadowing his feud with Anthropic.

“The War Department’s AI will not be woke,” Hegseth said. “We’re building tools and weapons for war, not chatbots for Ivy League social media.”

So what now?

The Pentagon has threatened to declare Anthropic a supply chain threat, which would bar Anthropic from doing business with the government, or to invoke the Defense Production Act (DPA) to force the company to adapt to the military’s needs. Hegseth has given Anthropic until 5:01 p.m. ET on Friday to respond. But as the deadline approaches, it’s anyone’s guess whether the Pentagon will make good on its threats.

This is not a fight that either party can easily walk away from. Sachin Seth, a VC at Trousdale Ventures who focuses on defense technology, says that the Pentagon’s threatened designation could mean “lights out” for the company.

However, he said, cutting Anthropic out of the DoD could itself become a national security issue.

“(The department) will have to wait six to 12 months for OpenAI or xAI to catch up,” Seth told TechCrunch. “That leaves a window of up to a year where they could be working not with the best model, but with the second or third best.”

xAI appears ready to step in and replace Anthropic, and given Elon Musk’s public statements, it is safe to assume the company would have no problem giving the DoD full control over its technology. Recent reports, however, suggest that OpenAI may hold to the same red lines as Anthropic.



