
Why Cohere’s ex-AI research lead is betting against the scaling race

AI labs race to build data centers as big as Manhattan, each costing billions of dollars and using as much energy as a small city. The effort is driven by a deep belief in “scaling” – the idea that adding more computing power to existing AI training methods will eventually produce super-intelligent systems capable of performing all kinds of tasks.

But a growing chorus of AI researchers say that scaling large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.

That’s the bet Sara Hooker, Cohere’s former Vice President of AI Research and a Google Brain alumna, is making with her new startup: Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it is built on the idea that scaling LLMs has become an inefficient way to get more performance from AI models. Hooker, who left Cohere in August, quietly announced the startup this month to start recruiting more broadly.

In an interview with TechCrunch, Hooker says that Adaption Labs builds AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the methods behind this approach and whether the company relies on LLMs or another architecture.


“We’re now at a turning point where it’s very clear that the formula of just scaling up these models – scale-based approaches, which are attractive but extremely boring – hasn’t produced intelligence capable of navigating or interacting with the world,” says Hooker.

According to Hooker, adaptation is the ‘core of learning’. For example, stub your toe as you walk past your dining room table, and you’ll learn to step around it more carefully next time. AI labs have tried to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled environments. However, current RL methods do not help production AI models (i.e. systems already used by customers) learn from their mistakes in real time. They just keep stubbing their toe.

Some AI labs offer consulting services to help companies tailor their AI models to their specific needs, but that comes at a price. OpenAI reportedly requires customers to spend upwards of $10 million with the company before it will offer fine-tuning consulting services.


“We have a handful of frontier labs that are defining this set of AI models that are offered the same way to everyone, and it’s very expensive to adapt them,” Hooker says. “And actually, I don’t think that has to be true anymore, and that AI systems can learn from an environment very efficiently. If we prove that, it will completely change the dynamics of who gets to control and shape AI, and really, who these models ultimately serve.”


Adaption Labs is the latest sign that industry confidence in scaling LLMs is wavering. A recent paper by MIT researchers found that the world’s largest AI models may soon show diminishing returns. The atmosphere in San Francisco also seems to be changing. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with famous AI researchers.

Richard Sutton, a Turing Award winner considered “the father of RL,” told Patel in September that LLMs can’t really scale because they don’t learn from real-world experiences. This month, early OpenAI contributor Andrej Karpathy told Patel that he has reservations about the long-term potential of RL to improve AI models.

These kinds of fears are not unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining – in which AI models learn patterns from reams of data – would lead to diminishing returns. Until then, pretraining had been the secret sauce for OpenAI and Google to improve their models.

These pretraining scaling concerns are now showing up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs in AI reasoning models, which take additional time and computing power to work through problems before answering, further expanded the capabilities of AI models.

AI labs seem convinced that scaling RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they thought it would scale well. Researchers at Meta and Periodic Labs recently released a paper exploring how RL could further scale performance – a study that reportedly cost more than $4 million, underlining how expensive current approaches still are.


Adaption Labs, on the other hand, aims to find the next breakthrough and prove that learning from experience can be much cheaper. The startup was in talks earlier this fall to raise a seed round of $20 million to $40 million, according to three investors who reviewed the pitch decks. They say the round has now closed, although the final amount is unclear. Hooker declined to comment.

“We plan to be very ambitious,” Hooker said when asked about the company’s investors.

Hooker previously led Cohere Labs, where she trained small AI models for business applications. Compact AI systems now routinely outperform their larger counterparts in coding, arithmetic and reasoning – a trend Hooker wants to continue.

She also built a reputation for expanding access to AI research globally, hiring research talent from underrepresented regions such as Africa. While Adaption Labs will soon open an office in San Francisco, Hooker says she plans to hire globally.

If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be enormous. Billions have already been invested in scaling LLMs, on the assumption that larger models will lead to general intelligence. But it’s possible that truly adaptive learning could prove not only more powerful, but also much more efficient.

Marina Temkin contributed to the reporting.
