Nvidia CEO Jensen Huang thinks the AI hype will continue for a while. Nvidia has been part of the “trillion dollar club” for a few months now, and it is buying back $25 billion of its own shares, a sign that Huang and co. expect the company’s value to rise even further. But can it deliver enough GPUs to supply all its data center customers?
These days, Huang can be found in many places. He no longer attends only graphics-centric events, but also, for example, this week’s VMware Explore. Nvidia now has partners everywhere: the top-of-the-line H100 and its other GPUs are the workhorses behind the seemingly endless AI hype. Each H100 reportedly costs about $3,000 to produce and sells for $25,000 to $30,000. Everything from ChatGPT to GitHub Copilot runs on them, both for training models on a dataset and for “inferencing,” from which the technology gets its predictive value.
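As a back-of-envelope sketch, the cost and price figures cited above imply an enormous gross margin per card. The numbers below are the ones reported in the article; the calculation itself is illustrative only.

```python
# Illustrative arithmetic: per-H100 cost and sale price as cited in the article.
production_cost = 3_000          # reported production cost in USD
price_low, price_high = 25_000, 30_000  # reported sale price range in USD

# Gross margin = (price - cost) / price
margin_low = (price_low - production_cost) / price_low
margin_high = (price_high - production_cost) / price_high

print(f"Implied gross margin per H100: {margin_low:.0%} to {margin_high:.0%}")
# → Implied gross margin per H100: 88% to 90%
```

In other words, even at the low end of the cited price range, roughly nine out of every ten dollars of an H100’s sale price would be gross profit, which helps explain the earnings figures that follow.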
Almost exclusively data centers
The revenue figures don’t lie. Nvidia brought in $13.51 billion, of which $10.31 billion came from data center customers. Profits of $6.188 billion represent an 843 percent (!) increase over the same quarter last year. The company is also buying back $25 billion of its own shares, a sign that it expects its market value of more than a trillion (in dollars or euros, if you will) to grow even further.
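To put that growth figure in perspective, the reported profit and percentage increase together imply what last year’s quarterly profit must have been. A quick check, using only the numbers from the article:

```python
# Illustrative check of the article's figures.
profit_now = 6.188   # reported quarterly profit, in billions of USD
growth = 8.43        # an 843 percent increase, as a fraction

# If profit grew by 843%, last year's profit was profit_now / (1 + growth).
profit_year_ago = profit_now / (1 + growth)

print(f"Implied profit a year earlier: ${profit_year_ago:.3f} billion")
# → Implied profit a year earlier: $0.656 billion
```

That is, the same quarter a year ago would have yielded roughly $0.66 billion in profit, nearly a tenfold difference.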
It’s easy to forget that the company has long promoted graphics cards primarily for use in PC gaming. Meanwhile, the focus is on “accelerated computing,” a term Nvidia has used for years, by the way. In short, it means that specific workloads run on specifically designed chips instead of general-purpose CPUs. The architecture capable of converting data to visual information also appears to be well-suited for AI workloads. Nvidia had been aware of this for some time, which now gives it a huge advantage over competitors such as AMD and Intel.
That suits Nvidia just fine: after two cryptocurrency boom cycles and strong pandemic-era demand for PC components, both of those markets shrank sharply. Incidentally, the company appears to have repurposed chips originally destined for gaming as AI chips. In addition to the Hopper architecture of the coveted H100 GPU, it now produces the L40S, a card based on Ada Lovelace, the same architecture found in the latest workstation and gaming cards.
Can it meet demand?
During an interview this week, Huang said that Nvidia is doing all it can to meet demand. Supply will “increase significantly for the rest of the year,” possibly in part thanks to the repurposed Ada chips mentioned above. The company is racing to produce as many GPUs as possible, although previous examples have shown that this is not easy for Nvidia. To increase capacity, it depends on chip manufacturer TSMC.

For the current generation, TSMC fabricates Nvidia’s chips on a 4-nanometer process, but the successor will most likely move to 3 nanometers, since a smaller process promises efficiency and performance gains. There, it will be a fight for the available silicon “wafers”: Apple is said to have already secured a hefty deal that gives it all of TSMC’s chips on that manufacturing process. Nvidia may have an alternative in Samsung, from which it previously sourced GPUs. It remains to be seen whether that will be enough, but for the rest of 2023 it will stick with its current offerings. That presents an opportunity to meet demand before competitors can respond to the shortages with rival products. For now, those remain absent.