
We are in a unique moment where the AI world is producing its own distinct breed of startup.
First, there is a whole generation of executives who made their names at the big tech companies and are now striking out on their own. There are also well-known researchers with deep expertise but vague commercial ambitions. There's a real chance that some of these new labs will become OpenAI-scale behemoths, but there's also room to do interesting research without worrying too much about commercialization.
The result? It's hard to tell who is actually trying to make money.
To keep things simple, I propose a kind of sliding scale for any foundation model company: five levels where it doesn't matter whether you're making money, only whether you're trying to. The idea is to measure ambition, not success.
Think of it as running from Level 1 (a pure science project) up to Level 5 (a full-blown commercial operation). All the big names are at Level 5: OpenAI, Anthropic, Gemini, and so on. Things get more interesting with the new generation of labs being founded now, which have big dreams but ambitions that can be hard to read.
Most importantly, the people running these labs can choose whatever level they want. There is so much money in AI right now that no one is pressuring them to build a business; even if the lab is purely a research project, investors should count themselves happy to participate. And if you're not interested in becoming a billionaire, you can live a much happier life at Level 2 than at Level 5.
Problems arise because it's not always clear where an AI lab falls on the scale, and much of the drama in the AI industry stems from that confusion. A lot of the concern over OpenAI's transition away from its nonprofit structure came from the fact that the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the flip side, you could argue that Meta's original AI research operation was a Level 2 when what the company actually wanted was a Level 4.
With that in mind, here’s a breakdown of four of today’s biggest AI labs, and how they stack up.
People& was this week's big AI news, and it's part of the motivation for this whole scale. The founders have compelling pedigrees for building the next generation of AI models, and the launch announcement put an emphasis on communication and collaboration tools.
But for all the press, People& has been cagey about how any of this translates into actual money-making products. The company clearly does want to build things; the team just won't commit to anything specific. The most they've said is that they will build some kind of AI tool for work, replacing the likes of Slack, Jira, and Google Docs and rethinking how those tools work on a fundamental level. Professional software for professionals!
It's my job to figure out what announcements like this mean, and I'm still confused by that last part. But it's specific enough that I think we can put them at Level 3.
This one is very difficult to gauge! In general, if OpenAI's former CTO and a lead on ChatGPT raises a $2 billion seed round, you'd assume there is a very solid roadmap. Mira Murati doesn't strike me as someone who jumps in without a plan, so until recently I'd have felt comfortable putting Thinking Machines Lab (TML) at Level 4.
But then the last two weeks happened. The departure of CTO and co-founder Barret Zoph grabbed a lot of headlines, in part because of the unusual circumstances involved. But at least five other employees left alongside Zoph, many of them reportedly worried about where the company was headed. In just one year, nearly half of the executives on TML's founding team have stopped working there. One way to read the situation is that they thought they had a solid plan to become the world's top AI lab, only to discover the plan wasn't as solid as they believed. Or, in terms of the scale, they signed up for a Level 4 lab and realized they were at a Level 2 or 3.
There isn't enough evidence yet to confirm the downgrade, but it's getting close.
Fei-Fei Li is one of the most respected names in AI research, best known for developing the ImageNet challenge that pioneered modern deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she directs two different AI labs. I won't bore you by running through all her honors and appointments, but suffice it to say that if she wanted to, she could spend the rest of her life collecting awards and being told how great she is. (Her book is great, too!)
So in 2024, when Li announced she had raised $230 million for a world-model startup called World Labs, you'd have assumed we were operating at Level 2 or lower.
But that was over a year ago, which is an eternity in the AI world. Since then, World Labs has shipped both a genuine world model and commercial products built on top of it. At the same time, we've seen real signs of demand for world models from video games and other specialized industries, and none of the big labs has built anything that can compete. The result looks very much like a Level 4 company, probably soon to graduate to Level 5.
Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) is seen as the classic example of a Level 1 startup, famously rebuffing acquisition overtures from Meta. There are no product plans and, apart from a bare-bones website, there seems to be nothing there at all. On that pitch alone, Sutskever raised $3 billion! He has always been more interested in the science of AI than the business, and all indications are that this is a bona fide science project.
That said, the world of AI moves fast, and it would be foolish to count SSI out of the business entirely. Pressed during his recent appearance on the Dwarkesh Podcast, Sutskever offered two reasons SSI might end up shipping something after all: either "if the timelines were longer, which they would be," or because "there is a lot of value in the best and most powerful AI that affects the world." In other words, if the research goes very well or very badly, we could see SSI jump a few levels very quickly.