
With the approval of CEO Sam Altman, OpenAI’s cooperation with the Department of Defense “has been accelerated,” even as Altman concedes that “the outlook does not look good.”
After negotiations between Anthropic and the Pentagon broke down on Friday, President Donald Trump ordered federal agencies to stop using Anthropic technology after a six-month transition period, and Secretary of Defense Pete Hegseth portrayed the company’s AI as a strategic threat.
OpenAI, by contrast, was quick to announce that it had reached its own deal to deploy its models. With Anthropic saying it was drawing red lines against using its technology for autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had similar red lines, there were obvious questions: Was OpenAI honest about its safeguards? Why was it able to get a deal while Anthropic wasn’t?
As OpenAI executives defended the deal on social media, the company also published a blog post explaining its reasoning.
Instead, the post pointed to three areas where it said OpenAI models could not be used: mass domestic surveillance, autonomous military systems, and “high-level decisions (for example, systems like ‘public debt’).”
The company said that, unlike other AI companies that have “reduced or completely eliminated their security tools and relied heavily on user policies as their main defense in national security deployments,” the OpenAI partnership protects its red lines “through a comprehensive, multi-tiered approach.”
“We remain security conscious, we deploy through the cloud, OpenAI employees remain in the loop, and we have strong contractual protections,” the blog said. “All of this sits on top of the strongest protections available under US law.”
The company added, “We do not know why Anthropic was unable to complete this agreement, and we hope that they and other labs will consider it.”
After publication, Techdirt’s Mike Masnick argued that the agreement “allows for domestic surveillance,” because it says data collection will follow Executive Order 12333 (along with several other authorities). Masnick described that framework as “how the NSA hides their domestic surveillance by capturing communications on lines *outside the US* even though they contain information from US citizens.”
In a LinkedIn post, OpenAI’s director of national security cooperation, Katrina Mulligan, said that much of the discussion about the agreement’s language conflates the red lines OpenAI has drawn publicly (on mass domestic surveillance and autonomous weapons) with the deployment plan negotiated under a single agreement with the Department of War.
“This is how this works,” Mulligan said, adding, “The deployment architecture is more important than the contract language (…) By limiting our deployment to the cloud API, we can ensure that our models cannot be directly integrated into devices, sensors, or other operational tools.”
Altman also took questions about the deal on X, where he acknowledged that it had triggered a huge backlash against OpenAI (to the point that Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store). So why do it?
“We wanted to keep a low profile, and we thought the deal was good,” Altman said. “If we are right and this leads to a narrowing of the gap between the DoW and the industry, we will be seen as having been prescient, a company that went through a lot of pain to support the business. If not, we will continue to be known as (…) fast and careless.”