Reflection AI raises $2B to be America’s open frontier AI lab, challenging DeepSeek

Reflection AI, a startup founded last year by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, roughly a 15x jump from the $545 million valuation it held just seven months ago. The company, which originally focused on autonomous coding tools, is now positioning itself as an open-source alternative to closed frontier labs like OpenAI and Anthropic, and as a Western counterpart to Chinese AI companies like DeepSeek.

The startup was launched in March 2024 by Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, a co-creator of AlphaGo, the AI system that famously defeated the world champion Go player in 2016. Their background building these highly advanced AI systems is central to their pitch: that the right AI talent can build frontier models outside the established tech giants.

Along with the new round, Reflection AI announced that it has recruited a team of top talent from DeepMind and OpenAI and built an advanced AI training stack that it promises will be open to everyone. Perhaps most importantly, Reflection AI says it has “identified a scalable commercial model that aligns with our open intelligence strategy.”

Reflection AI’s team currently numbers about 60 people, mostly AI researchers and engineers working across infrastructure, data, and algorithm development, according to Laskin, the company’s CEO. Reflection AI has secured a compute cluster and hopes to release a frontier language model trained on “tens of trillions of tokens” next year, he told TechCrunch.

“We’ve built something once only thought possible in the world’s best labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale,” Reflection AI wrote in a post.

MoE refers to the architecture powering frontier LLMs: systems that previously only large, closed AI labs were able to train at scale. DeepSeek had a breakthrough when it figured out how to train these models at scale in an open manner, followed by Qwen, Kimi, and other Chinese models.

“DeepSeek and Qwen and all these models are our wake-up call because if we don’t do something about it, the global standard of intelligence will actually be built by someone else,” Laskin said. “It will not be built by America.”

Laskin added that this puts the US and its allies at a disadvantage, because companies and sovereign states often won’t use Chinese models due to potential legal ramifications.

“So you can choose to live at a competitive disadvantage or rise to the occasion,” Laskin said.

American technologists have largely celebrated Reflection AI’s new mission. David Sacks, the White House AI and Crypto Czar, posted on X: “It’s great to see more US open source AI models. A meaningful segment of the global market will prefer the cost, customizability and control that open source offers. We want the US to win this category too.”

Clem Delangue, co-founder and CEO of Hugging Face, an open and collaborative platform for AI builders, told TechCrunch about the round: “This is indeed great news for American open-source AI.” Delangue added: “The challenge now will be to demonstrate a high velocity of sharing open AI models and datasets (similar to what we see from the labs currently dominating open-source AI).”

Reflection AI’s definition of “open” seems to center on access rather than development, similar to Meta’s strategy with Llama or Mistral’s. Laskin said Reflection AI will release model weights, the core parameters that determine how an AI system behaves, for public use, while datasets and full training pipelines remain largely proprietary.

“In reality, the model weights are the most impactful because anyone can use the model weights and tinker with them,” Laskin said. “The infrastructure stack, only a select handful of companies can actually use it.”

That balance also underlies Reflection AI’s business model. Researchers will be able to use the models freely, Laskin said, but the revenue will come from large corporations building products on top of Reflection AI’s models and from governments developing “sovereign AI” systems, that is, AI models developed and controlled by individual countries.

“Once you get into the area where you’re a large enterprise, you want an open model by default,” Laskin said. “You want something that you own. You can run it on your infrastructure. You can control the cost of it. You can tailor it to different workloads. Because you’re paying an ungodly amount of money for AI, you want to be able to optimize it as much as possible, and that’s really the market we serve.”

Reflection AI has not yet released its first model, which will be largely text-based at first, with multimodal capabilities to follow, according to Laskin. It will use the money from this latest round to acquire the compute needed to train the new models, the first of which the company plans to release early next year.

Investors in Reflection AI’s latest round include Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV and others.
