Why OpenAI bet big on AMD with $90B deal

AMD has agreed to supply OpenAI with 6 gigawatts of AI capacity, a deal reportedly worth $90 billion. In addition, the arrangement sets up the ChatGPT maker to acquire up to 10 percent of AMD’s shares. The deal signals extraordinary confidence in Nvidia’s biggest competitor. Why?

First, OpenAI will deploy an initial gigawatt of Instinct MI450 GPUs for AI workloads, even though that chip will not be released for another year. The rest of the enormous capacity will consist of AI processors that have yet to appear even on an AMD roadmap: only the MI500 series is officially confirmed, and it lands beyond 2026.

Beyond the technicalities, the partnership has already proven to be rocket fuel for AMD’s stock price. At the time of writing, the chip manufacturer’s market cap has shot up by almost 24 percent in a single day. If nothing else, that shows emphatic confidence in the continuing explosion of demand for AI capacity.

Why AMD?

It’s clear that OpenAI intends to draw from multiple silicon sources. Exactly two weeks ago, CEO Sam Altman’s company announced that it would purchase 10 gigawatts of Nvidia systems, with Nvidia investing up to $100 billion in OpenAI as those gigawatts get rolled out. In parallel with the AMD deployment, OpenAI will bring the first of these gigawatts online with Nvidia’s Vera Rubin systems in the second half of 2026. Expect lights to dim in households across the globe as these gigaclusters go live: the sheer demand on electrical grids will leave some heads heavily scratched.

At any rate, the relationship between OpenAI and Nvidia differs from that between OpenAI and AMD. In the first case, Nvidia is effectively buying a piece of its own customer, provided that customer remains a prolific one; in the second, AMD is financially motivated to deliver high-performance processors and is itself the recipient of investment. In other words, the better AMD performs as a rival to Nvidia, the more OpenAI rewards it for doing so. By becoming a shareholder, OpenAI also buys influence over AMD’s direction, aligning the two companies’ objectives for the future as well.

AMD’s open attitude

AMD is transparent about its objectives, OpenAI deal or not. Whereas Intel failed to deliver on its AI chip promises and Nvidia CEO Jensen Huang mainly talks in hyperbole about AI factories, AMD’s tactics are more fleshed out and pragmatic, yet still ambitious. It pairs training and inference performance comparable to Nvidia’s with an open software stack. That may well be a winning formula, although there need not be a loser.

Such openness is an increasingly attractive feature compared to Nvidia’s walled garden, as every major customer wants to avoid a potentially permanent lock-in. That means adopting technologies that compete with Nvidia’s CUDA, the name for both the programming platform for its GPUs and the software tooling that surrounds it.

However, to be a formidable competitor to Nvidia, AMD must execute on “rackscale” AI: not individual AI chips, but complete “AI boxes”, if you will, that fit into standard data centers. AMD’s future offering in this format is known as Helios, which combines GPUs, DPUs and CPUs with Ethernet connectivity into a standards-compatible, scalable architecture. Note that such scalability only applies to data centers able to supply particularly high wattages per rack and the water cooling needed to tame such artificial beasts.

To get the most out of Helios and its kin, these rackscale systems need to be grouped into large clusters, unleashing tens or hundreds of thousands of GPUs simultaneously on the heaviest AI workloads, such as training GPT-5 or its successors. The customers opting for such a strategy are typically hyperscalers: mainly Microsoft, Amazon, Google, Oracle, OpenAI, and Meta. There aren’t many of them, as that short list shows, but they refresh their hardware as fast as AI chipmakers allow.

Read also: AMD’s bold claim: Nvidia has no moat

Multiple winners

The overarching question remains whether all the astronomical investments in AI infrastructure will ever pay off. Obviously, we cannot answer that. What is clear is that AMD’s proposition is catching on with one of the few customers that truly matter financially. After all, the bulk of Nvidia’s AI revenue comes from the aforementioned big spenders, exactly the group AMD is angling for. It is telling that the AI advance long centered on Nvidia and has only broadened to include AMD now that it has become truly competitive, not just on benchmarks but also in terms of its ecosystem. These major deals also prove something else: however promising startups such as Cerebras and Groq may be, the ‘big boys’ in the world of AI chips are still only Nvidia and AMD, with a huge gap in adoption even between those two.

In a single day, the impression that AI isn’t yet a big success for AMD has been erased, an impression investors seemed to hold two months ago after somewhat disappointing quarterly figures. It shows that the stock market valuation of any of these companies is not a good indication of the real relationships between the AI players. Conversely, deals like today’s in no way prove that the promise of AGI will come true; we must remain skeptical about that. OpenAI simply needs all the chips it can get, AMD can supply some of them, and its chips are good enough not to be overlooked. In the AI world of 2025, that already counts as a major success against the monolithic Nvidia.

So, to answer the question in the title: forget the false dichotomy. It’s a simple case of OpenAI opting for all viable AI chips to max out its capacity, and it will remain that way for a while.

Tip: In addition to OpenAI’s models, Microsoft is also deploying Claude