Data centers are only becoming more important. As enterprises go digital, organizations grow ever more reliant on external hardware to function. Now the AI hype provides a motive to invest heavily in renewing data center infrastructure. Nvidia CEO Jensen Huang talks about hardware worth $1 trillion that needs to be revamped worldwide. Is he right?

The computing industry is going through two simultaneous transitions, according to Huang: accelerated computing and generative AI. There are, then, plenty of reasons to opt for brand-new AI-accelerating hardware. Those with the money available could invest hundreds of thousands of dollars in a fleet of Nvidia H100 GPUs, for example. This hardware can run generative AI models up to thirty times faster than its predecessors within the Nvidia ecosystem. Time is money, especially when it comes to data center cycles. In the long run, the added efficiency also offers potential cost savings. Those who doubt whether this continuous improvement is sustainable can in any case take heart from the bubbling competition: Intel, for example, has changed course because the AI hype has grown so large, even if that shift will take some time to pay off.
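To make the cost-savings argument concrete, here is a back-of-envelope sketch with invented numbers (the runtimes, power draws and electricity price are all assumptions, not measured figures): even if a newer accelerator draws more power per card, a large speedup can cut the energy bill per job.

```python
# Back-of-envelope sketch with invented numbers: even if a newer accelerator
# draws more power per card, a large speedup can cut energy (and cost) per job.
old_runtime_h, old_power_kw = 30.0, 0.4   # hypothetical older-generation GPU
new_runtime_h, new_power_kw = 1.0, 0.7    # hypothetical H100-class GPU, ~30x faster
price_per_kwh = 0.25                      # assumed electricity price in euros/kWh

for name, hours, kw in [("old GPU", old_runtime_h, old_power_kw),
                        ("new GPU", new_runtime_h, new_power_kw)]:
    energy_kwh = hours * kw
    print(f"{name}: {energy_kwh:5.1f} kWh -> {energy_kwh * price_per_kwh:.2f} euros per job")
```

With these assumed figures, the thirty-fold runtime gap dominates the higher power draw, which is the core of Huang's efficiency pitch.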

Of course, this kind of talk is exactly what you would expect from Nvidia's sales pitch. As the market leader, it can count on immense demand for its own AI chips. However, as we wrote yesterday, the supply of its very latest chips is limited and hard to come by. An investment in new hardware will only become more expensive for data center operators. In addition, there are external factors that contradict Huang's idyllic picture.

Energy, power, climate

None other than Ampere founder and former Intel president Renée James recently sounded the alarm about the AI hype in data centers. Speaking to Bloomberg, she argued that the current application of AI hardware consumes far too much energy to be sustainable. Granted, James has a product of her own to sell: Ampere processors, Arm-based chips that are more efficient than their x86 counterparts. Still, her point stands: we cannot endlessly funnel energy into giant data centers.

These problems are certainly not new; they create the need for innovation and smarter use of available resources. The skyrocketing inflation and energy costs of recent months have already raised the prospect of passing costs through to customers. Here in the Netherlands, the power grid is already overloaded, so simply throwing extra hardware at the problem is not an option. Above all, climate change casts a long shadow over this issue, as it does everywhere.

Working smarter: is speed really necessary?

Of course, the Nvidia CEO cannot account for locally varying factors such as energy costs, power grid load and inflation. However, other CEOs in the AI arena acknowledge that growth cannot continue indefinitely. According to OpenAI chief Sam Altman, AI models are just about done expanding their parameter counts, which should eventually keep the required computing power within limits. The key will be to seek improvement by working smarter.

This applies not only to developers of generative AI models; it holds for data centers as well. Greater efficiency can be achieved in ways other than swapping hardware. Huang likes to sell chips, so he won’t be the first to champion other improvements. Data Center Knowledge spoke with Omdia AI analyst Bradley Shimmin, who somewhat questioned the Nvidia CEO’s statements. He said, “There is a different trend going on [than investment in new AI hardware, ed.] where researchers are learning to deal with smaller models with fewer parameters, highly curated datasets and smarter training and fine-tuning through the use of PEFT and LoRA, for example.” These two techniques reduce the number of parameters that need to be trained when adapting AI models. In addition, by no means do all applications require hyper-fast AI: many companies will get by with a fairly compact form of AI or use it sparingly.
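As an illustration of what Shimmin describes, here is a minimal sketch of LoRA-style parameter-efficient fine-tuning using Hugging Face’s peft library. The base model, rank and target modules are arbitrary choices for demonstration, not recommendations from the article.

```python
# Minimal LoRA fine-tuning setup with Hugging Face's peft library.
# Assumes `transformers` and `peft` are installed; the base model and
# hyperparameters are illustrative choices, not recommendations.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)

# Only the small adapter matrices are trainable; all base weights stay frozen.
model.print_trainable_parameters()
```

Because only the small adapter matrices are trained while the base weights stay frozen, the trainable share typically drops to well under one percent of the model, which is exactly why such fine-tuning fits on far more modest hardware.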

That concerns the efficiency of AI applications themselves, but data centers, too, have plenty to gain in this area. Strikingly, even before the ChatGPT-inspired AI hype, the topic of AI was already coming up in a different form regarding data centers. Ralf Haller, executive VP of AI research company Nnaisense, outlined some of the technology’s potential to reduce data center costs in mid-2022. AI can, for example, optimize a data center’s energy consumption. Haller recommended treating a data center as a complete system and optimizing it as a whole.
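To sketch what such system-level optimization might look like (a hypothetical illustration, not Nnaisense’s actual method; the telemetry features, data and coefficients are all invented), a model can learn how operating conditions drive cooling energy and then suggest the cheapest setting:

```python
# Hypothetical sketch of AI-assisted data center optimization. The telemetry
# features, training data and coefficients are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Fake telemetry: IT load (kW), outside temperature (°C), cooling setpoint (°C)
X = rng.uniform([200, 5, 18], [800, 35, 27], size=(1000, 3))
# Fake target: cooling power rises with load and heat, falls with a higher setpoint
y = 0.3 * X[:, 0] + 4.0 * X[:, 1] - 8.0 * X[:, 2] + rng.normal(0, 5, 1000)

model = GradientBoostingRegressor().fit(X, y)

# For current conditions, evaluate candidate setpoints and pick the cheapest one
load, outside_temp = 550.0, 28.0
candidates = np.arange(18.0, 27.5, 0.5)
features = np.column_stack([
    np.full_like(candidates, load),
    np.full_like(candidates, outside_temp),
    candidates,
])
best = candidates[np.argmin(model.predict(features))]
print(f"Suggested cooling setpoint: {best:.1f} °C")
```

In a real facility the model would be trained on historical telemetry and constrained by thermal safety limits rather than fitted to fake data, but the principle of optimizing the whole system is the same.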

Patience

Finally, we cannot ignore the fact that political bodies are working on AI laws. China, the EU and the U.S. all have initiatives on the table to regulate the use of AI. Think of restricting the use of user data to improve large language models, or guaranteeing copyright protection within datasets. Such initiatives mean that, as a data center manager, you don’t have to run straight to Nvidia with your debit card. After all, regulations may constrain the continued growth of AI applications in business. In the short term, the AI hype may cause companies to experience FOMO (fear of missing out), partly because prices for the most coveted AI chips only seem to be going up. However, patience may prove a virtue: those who wait can just as well get into AI hardware at a later date. By then, Nvidia will surely have another new chip ready that is significantly more efficient and powerful.