Nvidia has invested $2 billion in CoreWeave and is expanding the partnership to build more than 5 gigawatts of AI factories by 2030. These AI factories (large-scale data centers) supply companies with the GPU computing capacity needed to run AI workloads.
Nvidia and CoreWeave have announced a significant expansion of their long-standing collaboration, with Nvidia investing $2 billion in CoreWeave Class A shares at $87.20 per share.
Demand for AI computing is growing exponentially. To meet it, Nvidia and CoreWeave will align their infrastructure, software, and platforms more closely. CoreWeave builds and operates AI factories based on Nvidia's accelerated computing technology, and Nvidia's financial strength will speed up the acquisition of the land, power, and buildings these facilities require.
AI factories as the foundation of an industrial revolution
“AI is entering its next frontier and driving the largest infrastructure buildout in human history,” said Nvidia CEO Jensen Huang. “CoreWeave’s deep AI factory expertise, platform software, and unmatched execution velocity are recognized across the industry. Together, we’re racing to meet extraordinary demand for NVIDIA AI factories — the foundation of the AI industrial revolution.”
CoreWeave specializes in AI infrastructure. The company acquired AI developer platform Weights & Biases in March and attracted an investment from Cisco at a $23 billion valuation.
The collaboration also includes technical integration. CoreWeave will deploy multiple generations of Nvidia infrastructure, including the Rubin platform, Vera CPUs, and BlueField storage systems. Nvidia, in turn, will test and validate CoreWeave's AI-native software, such as SUNK and CoreWeave Mission Control, with the goal of incorporating it into Nvidia's reference architectures for cloud partners and enterprise customers.
Michael Intrator, co-founder, president, and CEO of CoreWeave, emphasized that the collaboration stems from a fundamental belief: “AI succeeds when software, infrastructure and operations are designed together. Nvidia is the leading and most requested computing platform at every phase of AI – from pre-training to post-training – and Blackwell provides the lowest cost architecture for inference.”