Google’s answer to the AI arms race — promote the guy behind its data center tech

Google just took a big step in the AI infrastructure arms race, elevating Amin Vahdat to chief technologist for AI infrastructure, a newly created position reporting directly to CEO Sundar Pichai, according to an internal memo reported by Semafor. It’s a signal of how important this work has become as Google approaches $93 billion in capital expenditures by the end of 2025 – a figure that parent company Alphabet expects to be much higher next year.
Vahdat is not new to the game. The computer scientist, who earned a PhD from UC Berkeley and started as a research intern at Xerox PARC in the early 1990s, has spent the past fifteen years quietly building Google’s AI backbone. Before joining Google in 2010 as an engineering fellow and VP, he was an associate professor at Duke University and later a professor and SAIC chair at UC San Diego. His academic credentials are formidable – with what appear to be some 395 published articles – and his research has always been aimed at making computers work more efficiently at large scale.
Vahdat already has a high profile at Google. Just eight months ago at Google Cloud Next, he unveiled the company’s seventh-generation TPU, called Ironwood, in his role as VP and GM of ML, Systems and Cloud AI. The specs he listed at the event were staggering: more than 9,000 chips per pod delivering 42.5 exaflops of computing power – more than 24 times the power of the world’s No. 1 supercomputer at the time, he said. “The demand for AI computing has increased by a factor of 100 million in just eight years,” he told the audience.
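That “more than 24 times” comparison is easy to sanity-check. A quick sketch – note the article doesn’t name the supercomputer; the ~1.742 exaflops figure for El Capitan (Top500, November 2024) is an assumption on our part:

```python
# Sanity check on the Ironwood comparison:
# a 42.5-exaflop pod vs. the then-No. 1 supercomputer.
IRONWOOD_EXAFLOPS = 42.5     # per-pod figure cited at Google Cloud Next
EL_CAPITAN_EXAFLOPS = 1.742  # assumed HPL result, Top500 Nov 2024

ratio = IRONWOOD_EXAFLOPS / EL_CAPITAN_EXAFLOPS
print(f"{ratio:.1f}x")  # → 24.4x
```

Under that assumption, the ratio lands just above 24, consistent with the “more than 24 times” claim.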
As noted by Semafor, behind the scenes Vahdat has orchestrated the unglamorous but essential work that keeps Google competitive, including the custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI, and Jupiter, the super-fast internal network that lets all of its servers talk to each other and move massive amounts of data. (In a blog post late last year, Vahdat said Jupiter now scales to 13 petabits per second – enough bandwidth, he explained, to theoretically support a video call for all 8 billion people on Earth at the same time.) It’s the invisible plumbing that connects everything from YouTube and Search to Google’s massive AI training operations across hundreds of data centers worldwide.
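The per-person arithmetic behind that video-call claim checks out. A quick sketch – the 1–2 Mbps figure for a basic video call is a common rule of thumb, not from the article:

```python
# Back-of-the-envelope check of the Jupiter bandwidth claim:
# 13 petabits per second shared across 8 billion people.
JUPITER_BITS_PER_SEC = 13e15  # 13 petabits/second, per Vahdat's blog post
WORLD_POPULATION = 8e9        # 8 billion people

per_person_mbps = JUPITER_BITS_PER_SEC / WORLD_POPULATION / 1e6
print(f"{per_person_mbps:.3f} Mbps per person")  # → 1.625 Mbps per person
```

About 1.6 Mbps each – roughly the bitrate of a standard-definition video call, so the claim is plausible in aggregate.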
Vahdat has also been heavily involved in the ongoing development of Borg, Google’s cluster management system – the brains that coordinate work across its data centers, deciding which servers should perform which tasks, when, and for how long. And he has said he oversaw the development of Axion, Google’s first custom Arm-based general-purpose CPU designed for data centers, which was unveiled last year and continues to be built out.
In short, Vahdat is central to Google’s AI story.
In a market where top AI talent commands astronomical pay and constant recruitment, Google’s decision to elevate Vahdat to the C-suite could also be about retention. Once you’ve made someone the centerpiece of your AI strategy for fifteen years, you make sure he or she stays.