A new test for AI labs: Are you even trying to make money?

We are at a unique moment for AI companies building their own base model.

First, there’s a whole generation of industry veterans who made their names at big tech companies and are now going solo. There are also legendary researchers with vast experience but ambiguous commercial ambitions. There’s a clear chance that at least some of these new labs will become OpenAI-sized behemoths, but there’s also room for them to do interesting research without worrying too much about commercialization.

The end result? It becomes difficult to tell who is actually trying to make money.

To make things simpler, I propose a sort of sliding scale for any foundation model company. It’s a five-level scale, where it doesn’t matter if you actually make money – only if you try. The idea here is to measure ambition, not success.

Think about it in these terms:

  • Level 5: We already make millions of dollars every day, thank you very much.
  • Level 4: We have a detailed multi-phase plan to become the richest people on earth.
  • Level 3: We have many promising product ideas, which will be revealed over time.
  • Level 2: We have the outlines of a draft of a plan.
  • Level 1: True wealth is when you love yourself.

The big names are all at Level 5: OpenAI, Anthropic, Google DeepMind, and so on. The scale gets more interesting with the new generation of labs now launching, with big dreams but ambitions that are harder to read.

Crucially, the people involved in these labs can generally choose any level they want. There is so much money in AI right now that no one will press them for a business plan. Even if the lab is just a research project, investors will want to feel involved. If you’re not particularly motivated to become a billionaire, you might live a happier life at Level 2 than at Level 5.

The problems arise because it’s not always clear where an AI lab lands on the scale – and much of the current drama in the AI industry stems from that confusion. Much of the anxiety surrounding OpenAI’s transition away from its nonprofit structure came because the lab had been at Level 1 for years and then jumped to Level 5 almost overnight. On the other hand, you could argue that Meta’s early AI research was clearly at Level 2 when the company actually wanted Level 4.

With that in mind, here’s a quick overview of four of the largest AI labs today, and how they measure up on the scale.

Humans&

Humans& was the big AI news this week and part of the inspiration for this whole scale. The founders have a compelling pitch for the next generation of AI models, in which scaling laws give way to an emphasis on communication and coordination tools.

But despite the rave press, Humans& has been cagey about how this would translate into actual revenue-generating products. It seems like they do want to build products; the team just doesn’t want to commit to anything specific. All they’ve said is that they’re going to build some sort of AI workplace tool, one that will replace products like Slack, Jira, and Google Docs while also redefining how those tools work at a fundamental level. Workplace software for a post-software workplace!

It’s my job to know what this stuff means, and I’m still pretty confused about that last part. But it’s just specific enough that I think we can put them at Level 3.

Thinking Machines Lab

This is very difficult to judge! When a former OpenAI CTO and ChatGPT project lead raises a $2 billion seed round, you generally have to assume there is a fairly specific roadmap. Mira Murati doesn’t strike me as someone who jumps in without a plan, so coming into 2026, putting TML at Level 4 would have felt right.

But then came the last two weeks. The departure of CTO and co-founder Barret Zoph has made the most headlines, thanks in part to the unusual circumstances involved. But at least five other employees left alongside Zoph, many of them voicing concerns about the company’s direction. Barely a year after founding, almost half of the executives from TML’s founding team no longer work there. One way to read the events is that the team thought it had a solid plan to become a world-class AI lab, then discovered the plan wasn’t as solid as they thought. Or, in terms of the scale: they wanted a Level 4 lab but realized they were at Level 2 or 3.

There’s still not enough evidence to justify a downgrade, but it’s close.

World Labs

Fei-Fei Li is one of the most respected names in AI research, best known for creating the ImageNet challenge that kick-started modern deep learning. She currently holds the Sequoia professorship at Stanford, where she co-directs two different AI labs. I won’t bore you by going through all her awards and academy positions, but suffice it to say that if she wanted to, she could spend the rest of her life just receiving awards and being told how great she is. Her book is pretty good too!

So when Li announced in 2024 that she had raised $230 million for a spatial AI company called World Labs, you might have thought we were operating at Level 2 or lower.

But that was over a year ago, which is a long time in the AI world. Since then, World Labs has launched both a full world-generating model and a commercialized product built on top of it. Over the same period, we’ve seen real signs of demand for world modeling from both the video game and special effects industries – and none of the major labs have built anything that can compete. The result looks an awful lot like a Level 4 company, which may soon grow to Level 5.

Safe Superintelligence (SSI)

Safe Superintelligence (or SSI), founded by former OpenAI chief scientist Ilya Sutskever, seems like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI insulated from commercial pressure, even rejecting an attempted acquisition by Meta. There are no product cycles and, apart from the still-baking superintelligent base model, there appears to be no product at all. He raised $3 billion with this pitch! Sutskever has always been more interested in the science of AI than in business, and all indications are that this is essentially a genuine scientific project.

That said, the AI world is moving quickly – and it would be foolish to count SSI completely out of the commercial realm. In his recent appearance on the Dwarkesh podcast, Sutskever gave two reasons why SSI might pivot: either “if the timelines turned out to be long, which could happen” or because “there is a lot of value in having the best and most powerful AI out there making an impact on the world.” In other words, if the research goes very well or very poorly, we could see SSI jump up a few levels quickly.
