While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them

Microsoft CEO Satya Nadella on Thursday tweeted a video of his company’s first massive AI system – or AI “factory,” as Nvidia likes to call them. He promised that this is the “first of many” such Nvidia AI factories to be deployed in Microsoft Azure’s global data centers to run OpenAI workloads.
Each system is a cluster of more than 4,600 Nvidia GB300 rack computers featuring the in-demand Blackwell Ultra GPU chip and connected via Nvidia’s high-speed networking technology called InfiniBand. (In addition to AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.)
Microsoft promises it will deploy “hundreds of thousands of Blackwell Ultra GPUs” as it rolls out these systems globally. Although the scale of these systems is eye-watering (and the company has shared plenty of technical details for hardware enthusiasts to read), the timing of this announcement is also notable.
It comes just after OpenAI, its partner and well-documented frenemy, signed two high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has, by some estimates, amassed $1 trillion in commitments to build its own data centers, and CEO Sam Altman said this week that more would follow.
Microsoft clearly wants the world to know that it already has the data centers – over 300 in 34 countries – and that it is “uniquely positioned” to “meet the demands of groundbreaking AI today,” the company said. It added that these monstrous AI systems can also run next-generation models with “hundreds of trillions of parameters.”
We expect to hear more later this month about how Microsoft is gearing up to serve AI workloads. Microsoft CTO Kevin Scott will speak at TechCrunch Disrupt, held in San Francisco from October 27 to 29.




