AMD plans to release its MI300 chip this calendar year, which should provide competition for Nvidia’s H100 GPU. Given the huge demand for AI hardware, AMD could well start profiting from the new chips even if their performance does not match Nvidia’s.
Jenny Hardy, portfolio manager at GP Bullhound, told Reuters that Nvidia is still facing shortages because demand for AI hardware has risen dramatically. GP Bullhound holds shares in both AMD and Nvidia. According to Hardy, this leaves plenty of opportunity for AMD to fill the gap between supply and demand.
No LLM trick
AMD CEO Lisa Su has been making impressive promises since mid-2022. The performance-per-watt of the CDNA 3 architecture inside the MI300 is expected to be five times that of CDNA 2, something that is crucial for the enterprise sector. After all, savings in power consumption can free up budget for more hardware. With that in mind, a performance gap with Nvidia would not necessarily lead to lower adoption, especially given the supply shortages the GPU giant faces.
Incidentally, AMD’s architecture lacks dedicated acceleration for LLMs, on which generative AI computations depend, as there are no transformer engines on board. Su does note that there is still plenty of interest in the older MI250 chip, which remains well suited to less complex AI tasks.
More data centers?
AMD can draw hope from the prospect that parties like Microsoft and Google will need substantial data center expansions. This comes just as consumer demand has dropped sharply since the coronavirus pandemic.
To supply China with AI chips, AMD would still need to invest in suitable hardware. Under U.S. export restrictions, the MI300 is too powerful to ship there, as is Nvidia’s H100 chip, for example. Nvidia and Intel, however, have already released export-compliant variants.
Also read: ‘Zenbleed’ bug can leak sensitive data from Zen 2 processors