
No one has a good plan for how AI companies should work with the government


As Sam Altman found out on Saturday night, it’s a fraught time to work with the US government. Around 7pm, the OpenAI CEO announced that he would take questions publicly on X, as a way to defend his company’s decision to take on the Pentagon contract that Anthropic had just walked away from.

Most of the questions came down to OpenAI’s willingness to enable mass surveillance and autonomous weapons – the very uses Anthropic had insisted on prohibiting in its negotiations with the Pentagon. Altman repeatedly deferred to the government, saying it was not his role to set national policy.

“I strongly believe in the democratic process,” he wrote in one response, “and that our elected leaders have that power, and that we should all follow the rules.”

An hour later, he conceded, to his apparent surprise, that most people seemed to disagree. “There’s more of an obvious conflict than I thought,” Altman said, “over whether we want a democratically elected government or unelected corporations to have more power. I think that’s what people disagree about.”

It’s a defining moment for OpenAI and the tech industry as a whole. In his Q&A, Altman leaned on the logic of the defense business, where contractors and military brass alike are expected to defer to civilian leadership.

But more than that, as OpenAI transitions from a consumer success story into a national security contractor, the company seems ill-equipped to navigate its new role.

Altman’s public town hall came at a pivotal moment for his company. The Pentagon had just dropped Anthropic over the lab’s insistence on limits around surveillance and autonomous weapons in its security agreement. A few hours later, OpenAI announced that it had won the same contract Anthropic had abandoned. Altman framed the deal as a pragmatic move – and a profitable one. But he seemed unprepared for the scale of the backlash from the company’s users and employees.


OpenAI has been involved with the US government for years – but not like this. When Altman presented his case to congressional committees in 2023, for example, he was still following the standard playbook: talking up the company’s world-changing potential while acknowledging its risks and engaging earnestly with lawmakers – a perfect combination for winning over investors while keeping regulation at bay.

Less than three years later, that posture is no longer tenable. AI is now so powerful, and the stakes so high, that a close relationship with the government is unavoidable. What’s surprising is how unprepared both sides seem to be for it.

The latest and most serious controversy centers on Anthropic itself: US Defense Secretary Pete Hegseth reportedly planned on Friday to designate the lab as a potential threat. That threat now looms over every conversation that follows. As former Trump administration official Dean Ball wrote over the weekend, the designation would have cut Anthropic off from hardware suppliers and business partners, effectively destroying the company. It would be an unprecedented move against an American firm, and even if it were eventually reversed in court, the immediate damage would be done and the shockwaves would be felt across the industry.

As Ball describes it, Anthropic was performing an existing contract under terms that had been negotiated years earlier – only to have the administration insist on changing those terms. That’s not how things work between private parties, and it sends a chilling message to other contractors.

“Even after Secretary Hegseth stepped back from his threat against Anthropic, the damage has been done,” Ball wrote. “Many organizations, politicians, and others will operate under the assumption that tribalism now dominates.”

It’s a direct threat to Anthropic, but it’s also a problem for OpenAI. The company is already under pressure from employees to hold on to some red lines of its own. At the same time, right-wing media will be watching for any sign that OpenAI is a less-than-loyal partner. And in the middle of it all sits the Trump administration, seemingly intent on making the position as difficult as possible.

OpenAI arguably never set out to be a defense contractor, but the scale of its ambitions has pushed it into the same game as Palantir and Anduril. Dealing with the Trump administration means choosing a side. There are no apolitical actors here, and favor for friends means punishment for everyone else. OpenAI will pay a price either way – whether in lost business or lost employees – and it is unlikely to come out unscathed.

It may seem strange that this upheaval is coming at a time when there are more high-profile tech figures in Washington than ever before, but many of them seem perfectly comfortable with the way things are going. Among Trump-aligned venture capitalists, Anthropic has long been seen as having favored the Biden administration in ways that hurt the industry’s biggest companies – a sentiment echoed in Trump adviser David Sacks’ response to the ongoing controversy. Now that the tables have turned, few seem willing to stand up for the basic principles of free enterprise.

It’s a difficult position for any company to be in – and while politically favored players may benefit in the short term, they will be exposed when the political winds shift. There’s a reason the defense sector was dominated for decades by slow-moving, tightly regulated conglomerates like Raytheon and Lockheed Martin. Operating as the Pentagon’s industrial wing gave them the political cover to stay out of politics, focusing on technology without having to reinvent themselves every time the White House changed hands.

Today’s challengers can move faster than their predecessors – but they haven’t planned for the long game.


