
Anthropic filed two declarations in federal court in California on Friday afternoon, pushing back against the Pentagon’s claim that the AI company poses an “unacceptable threat to national security.” The filings argue that the government’s case rests on technical misunderstandings and on allegations that never surfaced during the months of negotiations that preceded the dispute.
The declarations were filed alongside Anthropic’s reply briefs in its lawsuit against the Department of Defense, ahead of a hearing before Judge Rita Lin in San Francisco on Tuesday, March 24.
The dispute began in late February, when President Trump and Defense Secretary Pete Hegseth publicly announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two declarations come from Sarah Heck, Anthropic’s head of policy, and Thiyagu Ramasamy, who leads the company’s public sector business.
Heck is a former National Security Council director who worked in the Obama White House before moving to Stripe and then to Anthropic, where she runs government relations. She was in the room at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Under Secretary of Defense Emil Michael.
In her declaration, Heck describes what she calls a central falsehood in the government’s filings: the claim that Anthropic sought approval authority over military operations. That claim, she says, is untrue. “At no time during Anthropic’s discussions with the department did I or any other Anthropic employee suggest that the company wanted such a position,” she wrote.
She adds that the Pentagon’s concern about Anthropic disabling or altering its technology mid-operation never came up in the negotiations. Instead, she says, it appeared for the first time in the court filings, without Anthropic ever being given a chance to respond.
Perhaps the most striking point in Heck’s declaration is that on March 4 — the day the Pentagon finalized its security designation against Anthropic — Under Secretary Michael emailed Amodei saying the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and on American military equipment.
The email, which Heck attaches as an exhibit to her declaration, is worth reading alongside Michael’s public statements in the days that followed. On March 5, Amodei released a statement saying the company had had a “productive discussion” with the Pentagon. The next day, Michael wrote on X that there was “no discussion” between the War Department and Anthropic. A week later, he told CNBC there was “no chance” of renegotiation.
Heck’s implicit question: if Anthropic’s positions on these two issues are what make it a national security threat, why did the Pentagon’s number two say the two sides were close to agreement on exactly those issues on the very day the designation was completed?
Ramasamy brings technical expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services overseeing AI deployments for government customers, including infrastructure for classified environments. At Anthropic, he is credited with building the team that brought Claude models into national security and defense work, including a $200 million deal with the Pentagon announced last summer.
His declaration takes aim at the government’s claim that Anthropic could disrupt military operations by blocking its technology or changing how it operates, which Ramasamy says is technically impossible. Once Claude is deployed inside a government-controlled, air-gapped system operated by a third-party contractor, he explains, Anthropic has no access: no remote kill switch, no backdoor, and no way to push unauthorized updates. Any kind of “active veto” is a myth, he argues, noting that model changes would require Pentagon approval and implementation.
Anthropic, he says, cannot even see what government users are typing into the system, let alone delete their data.
Ramasamy also disputes the government’s claim that Anthropic’s foreign hiring makes the company a security risk. Anthropic employees working on these systems hold US government security clearances, he notes — a background-check process required for access to classified information — adding that “to my knowledge,” Anthropic is the only AI company whose cleared employees have actually built AI models designed to run in classified environments.
Anthropic’s lawsuit argues that the supply-chain threat designation — used for the first time against an American company — amounts to government retaliation for the company’s public statements about AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, arguing that Anthropic’s refusal to allow all approved military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not punishment for the company’s views.