
The Anthropic Trap was self-inflicted


On Friday afternoon, while this interview was taking place, a news alert appeared on my computer: the Trump administration had cut ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei and other former OpenAI researchers who left over safety concerns. Secretary of Defense Pete Hegseth invoked a national security law (one designed to combat foreign threats) to stop the company from doing business with the Pentagon after Amodei refused to allow Anthropic's technology to be used to surveil US citizens or to power autonomous drones that can select and kill targets without human oversight.

It was a jaw-dropping sequence. Anthropic now stands to lose a $200 million contract and to be barred from working with other defense contractors after President Trump posted on Truth Social ordering every federal agency to "stop using Anthropic's technology." (Anthropic said it would challenge the Pentagon in court, arguing that the security-risk designation has no legal basis and "has never been publicly used in corporate America.")

Max Tegmark has been warning for a decade that the race to build ever more powerful AI systems is outstripping the world's ability to control them. The Swedish-American physicist and MIT professor founded the Future of Life Institute in 2014. In 2023, he famously helped organize an open letter, eventually signed by more than 33,000 people, including Elon Musk, calling for a pause in AI development.

His view of the Anthropic affair is unsparing: the company, like its competitors, sowed the seeds of its own problems. Tegmark's argument does not start with the Pentagon but with a decision made years earlier, a choice shared across the industry, to fight binding regulation. Anthropic, OpenAI, Google DeepMind, and others have long promised to regulate themselves effectively. And earlier this week, Anthropic walked back the centerpiece of its safety commitments: its promise not to release more powerful AI systems until the company is confident they won't cause harm.

Now, in the absence of binding laws, there is little to protect these companies, Tegmark says. Here's more from the interview, edited for length and clarity. You can hear the full discussion next week on TechCrunch's StrictlyVC Download podcast.

You've just seen the Anthropic news. What was your first reaction?

The road to hell is paved with good intentions. It's striking to think back ten years, when people were so excited about how we could use artificial intelligence to cure cancer, increase prosperity, and make America stronger. And here we are, with the US government angry at this company for not wanting its AI to be used for mass surveillance of Americans, and for not wanting killer robots that can autonomously, without human input, decide whom to kill.


Anthropic has positioned itself as the AI safety company, yet it has partnered with defense and intelligence agencies since at least 2024. Do you see a contradiction there?

It is contradictory. If I can give a little context here: yes, Anthropic has been very good at marketing itself as being all about safety. But if you look at the facts rather than the rhetoric, what you see is that Anthropic, OpenAI, Google DeepMind, and xAI have all talked a great deal about how much they care about safety. None of them came out advocating for binding safety regulations like we have in other industries. And all four companies have now broken their promises. First there was Google, with the big words, "Don't be evil." They dropped that. Then they dropped another commitment, a pledge not to use AI for harm, so they could sell AI for surveillance and weapons. OpenAI recently dropped the word safety from its conference. xAI shut down its entire safety team. And now Anthropic, earlier this week, dropped its most important safety commitment: the promise not to release powerful AI systems until it is sure they won't cause harm.

How did companies that famously courted defense contracts end up in this position?

All of these companies, especially OpenAI and Google DeepMind and Anthropic, have been pushing hard against AI regulation, saying, "Just trust us, we'll regulate ourselves." And they lobbied well. So we currently have less regulation of AI in America than of sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won't let you sell sandwiches until you fix it. But if you say, "Don't worry, I'm not selling sandwiches, I'm selling AI girlfriends to 11-year-olds, and they've been linked to suicides, and then I'm going to release something called superintelligence that could bring down the US government, but trust me, I've got it handled," there's no inspection at all. We check the sandwiches, not the AI.

So there are food safety regulations, but no AI regulations.

And this, I feel, is where all these companies went wrong. Because if they had taken all the safety promises they made back then, put them together, and gone to the government to say, "Please take our voluntary commitments and turn them into US laws that bind even our most reckless competitors," we wouldn't be here. Instead, we're in a lawless space. And we know what happens when there's total corporate impunity: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it's ironic that their refusal to accept rules about what is and isn't acceptable to do with AI has now come back to bite them.

There is currently no law against building AI to kill Americans, so the government is free to demand exactly that. If the industry had come out earlier and said, "We need such laws," it wouldn't be in this pickle. It really shot itself in the foot.

The industry's counterargument is always the race with China: if American companies don't build this, Beijing will. Does that argument still hold?

Let's examine that. The most common talking point from AI industry lobbyists, who now have more money than the lobbyists of the oil industry, the pharma industry, and the defense industry, is that whenever anyone wants any kind of regulation, they say, "But China." So let's look at that. China is in the process of banning AI girlfriends entirely. Not an age limit; they're looking to ban all anthropomorphic AI. Why? Not because they want to please America, but because they feel it is destroying Chinese youth and making China weak. Obviously, it's weakening America's youth, too.

And when people say we have to rush to build superintelligence to win against China, when we don't know how to control superintelligence, so that the outcome is humans losing control of Earth to alien machines: guess what? The Chinese Communist Party is all about control. Who in their right mind thinks Xi Jinping will allow a Chinese AI company to build something that could bring down the Chinese government? It's not going to happen. And it would be just as bad for the American government to be toppled by the first American company to build superintelligence. This is a threat to national security.

That's a striking framing: superintelligence as a threat to national security, not an economic prize. Do you see that sentiment growing in Washington?

I think that when people in the national security community listen to Dario Amodei explain his vision, he has famously talked about how we will soon have a country of geniuses in a data center, they might start thinking: wait, did Dario just use the word "country"? Maybe I should put that country of geniuses in a data center on the same threat list I put other countries on, because this is a threat to the US government. And I think soon enough, people in the US national security community will realize that uncontrollable superintelligence is a threat, not a weapon. It's quite analogous to the Cold War. There was a competition for dominance, economic and military, against the Soviet Union. We Americans won that one without entering a second contest, which would have been to see who could detonate the most nuclear bombs on the other side. People realized that was just suicide. No one wins. The same logic applies here.

What does all this mean for the development of AI? How close do you think we are to the behavior you are describing?

Six years ago, almost every AI expert I knew predicted we were decades away from having AI that could master language and cognition at a human level, maybe 2040, maybe 2050. They were all wrong, because we already have it now. We've seen AI progress rapidly from high school to college to PhD level to professor level in some areas. Last year, AI won a gold medal at the International Mathematical Olympiad, which is about as hard as math problems get for humans. I wrote a paper with Yoshua Bengio, Dan Hendrycks, and other top AI researchers a few months ago giving a rigorous definition of AGI. By that measure, GPT-4 was 27% of the way there, and GPT-5 was 57% of the way there. So we're not there yet, but going from 27% to 57% suggests it won't take long.

When I taught my students at MIT yesterday, I told them that even if it takes four more years, that means by the time they graduate, they may not be able to get a job. It's not too early to start planning.

Anthropic is now barred. I'm curious what happens next: will the other AI giants stand with it and say, we won't do this either? Or will someone like xAI raise its hand and say, Anthropic didn't want that deal, we'll take it? (Editor's note: Hours after this interview, OpenAI announced its own deal with the Pentagon.)

Last night, Sam Altman came out and said that he stands with Anthropic and has the same red lines. I admire him for having the courage to say that. Google, as of when we started this interview, had said nothing. If they stay silent, I think it's embarrassing for them as a company, and many of their employees feel the same way. We haven't heard anything from xAI. So it will be interesting to watch. Basically, this is the moment when everyone has to show their true colors.

Is there a version of this where things turn out well?

Yes, and that's why, strangely, I'm optimistic. There is an obvious alternative here. If we start treating the AI industry like any other industry, minus the impunity, then companies need to do something like a clinical trial before releasing something this powerful, and demonstrate to independent experts that they know how to control it. Then we get the golden age, all the good things from AI, without the outcomes none of us want. That's not the path we're currently on. But it could be.




