Microsoft has unveiled its first large-scale AI “factory” built with Nvidia hardware, underscoring its ability to power the next wave of advanced AI models. CEO Satya Nadella shared a video of the system, describing it as the first of many that will be deployed across Microsoft’s global Azure data centers. Each cluster contains more than 4,600 Nvidia GB300 rack computers equipped with the new Blackwell Ultra GPU, connected through Nvidia’s high-speed InfiniBand technology. The company says it plans to roll out hundreds of thousands of these GPUs worldwide to support OpenAI and other AI workloads.
The announcement comes shortly after OpenAI struck high-profile data center deals with Nvidia and AMD, committing an estimated $1 trillion toward building its own facilities. Microsoft’s move signals that while OpenAI is racing to establish its own infrastructure, Microsoft already operates more than 300 data centers in 34 countries, which the company claims uniquely position it to meet the demands of frontier AI. These new systems, it says, will be able to handle next-generation AI models with hundreds of trillions of parameters.
Beyond the technical specifications, the timing highlights Microsoft’s strategic message: that it has the infrastructure in place to meet surging AI demand at scale. The company is also reinforcing its close yet competitive relationship with OpenAI, its partner and rival in the race to dominate artificial intelligence. More details are expected later this month, when Microsoft CTO Kevin Scott speaks at TechCrunch Disrupt in San Francisco, where he is likely to share the company’s broader vision for scaling AI capabilities across its cloud platform. With Nvidia chips at the core and Azure’s global footprint, Microsoft is making a clear statement about its readiness to lead in the new era of AI computing.