Nvidia announces new open AI models and tools for autonomous driving research

Nvidia announced new infrastructure and AI models on Monday as it works to build the backbone technology for physical AI, including robots and autonomous vehicles that can sense and interact with the real world.
The semiconductor giant announced Alpamayo-R1, an open reasoning vision language model for autonomous driving research, at the NeurIPS AI conference in San Diego, California. The company claims it is the first vision-language-action model focused on autonomous driving. Vision language models can process text and images together, allowing vehicles to “see” their environment and make decisions based on what they perceive.
The new model is built on Nvidia’s Cosmos-Reason, a reasoning model that weighs decisions before acting. Nvidia first released the Cosmos model family in January 2025 and added more models in August.
Technology like Alpamayo-R1 is critical for companies looking to achieve Level 4 autonomous driving, which means full autonomy within a specific area and under specific conditions, Nvidia said in a blog post.
Nvidia hopes this type of reasoning model will give autonomous vehicles the “common sense” to better approach nuanced driving decisions the way humans do.
This new model is available on GitHub and Hugging Face.
In addition to the new model, Nvidia also uploaded new step-by-step guides, inference resources, and post-training workflows to GitHub, collectively called the Cosmos Cookbook, to help developers better use and train Cosmos models for their specific use cases. The guides cover data curation, synthetic data generation, and model evaluation.
These announcements come as the company is moving full speed ahead with physical AI as a new avenue for its advanced AI GPUs.
Nvidia’s co-founder and CEO Jensen Huang has repeatedly said that the next wave of AI is physical AI. Bill Dally, Nvidia’s chief scientist, echoed that sentiment in a conversation with TechCrunch last summer, emphasizing physical AI in robotics.
“I think eventually robots will be a big player in the world and we basically want to make the brains of all robots,” Dally said at the time. “To do that, we need to start developing the key technologies.”