OpenAI and Google employees rush to Anthropic's defense in DOD case


More than 30 OpenAI and Google DeepMind employees issued a statement Monday in support of Anthropic's case against the US Department of Defense, after the federal agency designated the AI company a potential threat.

"The government's treatment of Anthropic as a threat was an abusive and disproportionate use of power that is deeply problematic for our industry," said the brief, which was signed by Google DeepMind chief scientist Jeff Dean.

Late last week, the Pentagon designated Anthropic a threat — a label usually reserved for foreign adversaries — after the AI company refused to allow the Department of Defense (DOD) to use its technology to spy on Americans or to fire weapons. The DOD said it must be able to use AI for any "legitimate" purpose and not be constrained by a contractor.

The amicus brief in support of Anthropic appeared on the docket hours after the Claude developer filed two lawsuits against the DOD and other government agencies. Wired was the first to report the news.

In the court filing, Google and OpenAI employees say that if the Pentagon "is no longer satisfied with the terms of its contract with Anthropic," the agency could "simply cancel the contract and buy the services of another leading AI company."

The DOD signed a deal with OpenAI shortly after Anthropic was designated — a move many of the ChatGPT maker's employees opposed.

"If allowed to proceed, this punitive action against one of the leading US AI companies will undoubtedly have ramifications for the industrial and scientific competitiveness of the United States in the field of artificial intelligence and beyond," the brief states. "And it will undermine open discussion of the dangers and benefits of today's AI systems."


The document also argues that Anthropic's red lines are reasonable safeguards worth protecting. In the absence of government regulation governing the use of AI, it says, the contractual and technical restrictions that developers place on their systems are the most important safeguards against misuse.

Many of the employees who signed the document have also signed letters in recent weeks urging the DOD to remove the designation and calling on their companies' leaders to support Anthropic by refusing to let their own AI systems be used in its place.
