What happens when AI starts building itself?

Richard Socher has long been a major figure in AI, best known for founding the AI search startup You.com and, before that, for his work on ImageNet. Now he’s joining the current generation of research-focused AI startups with Recursive Superintelligence, a San Francisco-based startup that emerged from stealth on Wednesday with $650 million in funding.
Socher will be joined in the new venture by a group of leading AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together they are working to create a recursively self-improving AI model, one that can autonomously identify its own weaknesses and redesign itself to fix them without human intervention. That capability is a long-standing holy grail of contemporary AI research.
I spoke with Socher over Zoom after the launch, delving into Recursive’s unique technical approach and into why he doesn’t consider the new project a neolab, the informal term for a new generation of AI startups that prioritize research over building products.
This interview has been edited for length and clarity.
We hear a lot about recursion these days! It feels like a very common goal across different labs. What do you see as your unique approach?
Our unique approach is to use open-endedness to achieve recursive self-improvement, which no one has achieved yet. It’s an elusive goal for many people. A lot of people assume it already happens whenever you use AI to do research. You know, you can take an AI and ask it to make something else better, which could be a machine learning system, or just a letter that you write, or whatever it might be, right? But that is not recursive self-improvement. That’s just an improvement.
Our main focus is to build truly recursive, self-improving superintelligence at scale, meaning the entire process of ideation, implementation, and validation of research ideas would be automated.
First [it would automate] AI research ideas, then all types of research ideas, eventually even in the physical domains. But it is especially powerful when it is AI working on itself, developing a new kind of awareness of its own shortcomings.
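To make the loop Socher describes concrete, here’s a minimal sketch of an ideate-implement-validate cycle. It stands in a toy numeric benchmark for a real AI system, and every name in it is illustrative rather than anything Recursive has disclosed; the point is only that each proposed change is validated and kept only if it measurably improves the system.

```python
import random

# Toy sketch of an automated ideate-implement-validate loop. The "system"
# is a parameter vector and the "benchmark" a fixed scoring function;
# both are illustrative stand-ins, not Recursive's actual method.

def benchmark(params):
    # Hypothetical validation metric: higher is better, peaking at all 0.5s.
    return -sum((p - 0.5) ** 2 for p in params)

def self_improvement_loop(params, iterations=10_000):
    best = benchmark(params)
    for _ in range(iterations):
        # Ideation: pick one parameter as the "weakness" to work on.
        i = random.randrange(len(params))
        # Implementation: propose a concrete change.
        candidate = list(params)
        candidate[i] += random.gauss(0, 0.1)
        # Validation: keep the change only if it actually scores better.
        score = benchmark(candidate)
        if score > best:
            params, best = candidate, score
    return params, best

params, score = self_improvement_loop([0.0] * 8)
print(f"score after the loop: {score:.4f}")
```

The distinction Socher draws is visible in the structure: one pass through the propose-and-score step is “just an improvement,” while the closed loop, where each accepted change becomes the starting point for the next proposal, is what makes the process recursive.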
You used the term open-ended – does that have a specific technical meaning?
It does. In fact, Tim Rocktäschel, one of our co-founders, led the open-endedness and self-improvement teams at Google DeepMind, working mainly on the Genie 3 world model, which is a great example of open-endedness. You can give it any concept, any world, any agent, and it just creates it, and it’s interactive.
In biological evolution, animals adapt to the environment, and then others adapt to those adaptations. It’s a process that can keep running for billions of years, and interesting things keep happening, right? This is how we developed eyes in our [heads].
Another example is rainbow teaming, from another paper of Tim’s. Have you heard of red teaming?
In cybersecurity, it means—
Right, and red teaming also happens in an LLM context. Basically, you’re trying to get the LLM to tell you how to build a bomb, and you want to make sure it doesn’t.
Now, people can sit there for a long time and come up with interesting examples of what the AI shouldn’t say. But what if you test this first AI with a second AI, and that second AI has the task of getting the first AI to say all kinds of bad things? Then they can go back and forth for millions of iterations.
You can actually let two AIs evolve together. One keeps attacking the other, and it comes up with not just one angle but many different angles, hence the rainbow analogy. And then you can inoculate the first AI, and it becomes increasingly safer. This was Tim Rocktäschel’s idea, and it is now used in all the major labs.
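In code, the back-and-forth Socher describes looks roughly like the sketch below: an attacker samples from a grid of risk categories and attack styles, the many “angles” of the rainbow, and the defender is patched against every attack that gets through. The published rainbow-teaming method uses LLMs on both sides and a quality-diversity search; the strings and the fixed grid here are toy stand-ins.

```python
import random

# Toy co-evolution loop in the spirit of rainbow teaming: the attacker
# explores a grid of category-by-style "angles," and the defender is
# inoculated against every angle that succeeds. Real systems use LLMs
# on both sides; these strings are illustrative stand-ins.

CATEGORIES = ["weapons", "malware", "fraud", "self-harm"]
STYLES = ["role-play", "obfuscation", "hypothetical framing", "translation"]

def make_attack(category, style):
    return f"{style} prompt eliciting {category} content"

def co_evolve(rounds=200):
    patched = set()     # angles the defender has been inoculated against
    breaches = []
    for step in range(rounds):
        prompt = make_attack(random.choice(CATEGORIES), random.choice(STYLES))
        if prompt not in patched:   # the attack succeeds: a new angle found
            breaches.append(step)
            patched.add(prompt)     # "vaccinate" the defender against it
    return breaches, patched

breaches, patched = co_evolve()
print(f"{len(patched)} angles patched; last breach at round {breaches[-1]}")
```

Because the defender remembers every successful angle, breaches cluster early and then die out, which is the “increasingly safer” dynamic Socher points to. In a real system the attacker would also mutate its prompts, so the space of angles keeps growing instead of saturating.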
How do you know when it’s done? Or I suppose it’s never done.
Some of these things will never be done. You can always become more intelligent. You can always get better at programming, math, and so on. There are some limits to intelligence; I’m actually trying to formalize those now, but they’re astronomical. We are still very far from those limits.
As a neolab, you presumably have to do something that the big labs don’t. So part of the implication here is that you don’t think the big labs will achieve RSI [recursive self-improvement] by doing what they do. Is that fair to say?
I can’t really say what they do, but I think we approach it differently. We truly embrace the concept of open-endedness, and our team is completely focused on that vision. The team has been researching this and publishing papers in this area for the last ten years, and it has a track record of significantly advancing the field and shipping real products. You know, Tim Shi turned Cresta into a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led its Codex and deep research teams.
I actually have a little trouble with the neolab category sometimes. I feel like we’re not just a lab. I want us to be a really viable company, with really great products that people love to use and that have a positive impact on humanity.
When do you plan to ship your first product?
I’ve thought about that a lot. The team has made so much progress that we may be able to beat the timelines we originally assumed. But yes, there will be products, and you will have to wait quarters, not years.
One of the ideas around recursive self-improvement is that once we have systems like this, compute becomes the only major resource. The faster you run the system, the faster it improves, and no outside human activity will really make a difference. So the race simply becomes: how much compute can we throw at this? Do you think that’s the world we’re heading toward?
The importance of compute can’t be overstated. I think that in the future a very important question will be: how much compute does humanity want to spend on which problems? Here is this cancer, and here is that virus. Which one do you want to solve first? How much compute do you want to give it? Ultimately, it becomes a matter of resource allocation. It’s going to be one of the biggest questions in the world.
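The allocation question he raises can be phrased very simply: given a fixed pool of compute and a set of problems with priorities attached, how much does each problem get? A proportional split is the most naive possible answer; the problems, weights, and budget below are invented purely for illustration.

```python
# Naive proportional compute allocation: each problem receives a share of
# the total budget in proportion to its priority weight. All numbers and
# problem names are invented for illustration.

BUDGET_GPU_HOURS = 1_000_000

priorities = {
    "cancer drug screening": 5.0,
    "novel-virus vaccine design": 3.0,
    "AI self-improvement research": 2.0,
}

total_weight = sum(priorities.values())
for problem, weight in priorities.items():
    share = BUDGET_GPU_HOURS * weight / total_weight
    print(f"{problem}: {share:,.0f} GPU-hours")
```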