
Meta plans to purchase 350,000 Nvidia H100 GPUs this year to develop future AI models. In total, the company's computing power will reach the equivalent of 600,000 H100 chips by the end of 2024, according to CEO Mark Zuckerberg. That capacity will be needed, as competitors are buying in bulk as well.

Zuckerberg stated in an Instagram Reels video that Meta has big AI ambitions. The holy grail of AGI (artificial general intelligence) is the main goal: AI that can match human intelligence. Such a breakthrough is reportedly already in the works at OpenAI and has caused internal uproar.

Billions heading to Nvidia

Specifically, Meta hopes to buy 350,000 H100 GPUs, considered the most advanced chips for training and developing AI. Analyst firm Raymond James states that a single H100 typically sells for $25,000 to $30,000. Even with a likely bulk discount, the company is set to pay more than $9 billion to Nvidia, according to SiliconANGLE.
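
As a rough illustration of where that figure comes from (not Raymond James's or SiliconANGLE's own calculation), the order size and unit prices below are those cited above; the 10 percent bulk-discount factor is purely a hypothetical assumption.

```python
# Back-of-envelope estimate of Meta's H100 spend.
# Unit prices ($25,000-$30,000) and the 350,000-unit order come from the article;
# the 10% bulk-discount assumption is purely illustrative.

UNITS = 350_000
LOW_PRICE, HIGH_PRICE = 25_000, 30_000
ASSUMED_BULK_DISCOUNT = 0.10  # hypothetical, not a reported figure

list_low = UNITS * LOW_PRICE    # $8.75 billion at list price
list_high = UNITS * HIGH_PRICE  # $10.5 billion at list price
discounted_low = list_low * (1 - ASSUMED_BULK_DISCOUNT)
discounted_high = list_high * (1 - ASSUMED_BULK_DISCOUNT)

print(f"List price range:  ${list_low / 1e9:.2f}B - ${list_high / 1e9:.2f}B")
print(f"With 10% discount: ${discounted_low / 1e9:.2f}B - ${discounted_high / 1e9:.2f}B")
```

Even with the assumed discount, the range lands around the reported figure of more than $9 billion.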

Meta is not the only one seeking Nvidia's coveted products. Meta, like Microsoft, is estimated to have received 150,000 H100 GPUs through 2023, according to Omdia. Google, Amazon, Oracle and Tencent follow at a considerable distance, each with an estimated supply of 50,000 H100s. To meet the huge demand for AI applications and continue to innovate, all of these companies, like Meta, hope to increase their overall AI computing power significantly.

Battleground changes

The focus is currently on the H100, but AI hardware comes in many shapes and sizes. Meta, for example, speaks of an intended computing power equivalent to 600,000 H100 GPUs, presumably a mix that also includes models such as the older Nvidia A100 and the less powerful L40S.
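
To make the idea of "H100 equivalents" concrete, the sketch below weights a mixed fleet by assumed per-GPU performance relative to the H100. The weights and fleet sizes are hypothetical placeholders for illustration, not figures disclosed by Meta or Nvidia.

```python
# Illustrative sketch of expressing a mixed GPU fleet in "H100 equivalents".
# All weights and counts below are hypothetical, NOT figures from Meta or Nvidia.

H100_EQUIVALENT_WEIGHT = {
    "H100": 1.0,   # baseline
    "A100": 0.3,   # assumed fraction of H100 AI performance (illustrative)
    "L40S": 0.25,  # assumed fraction of H100 AI performance (illustrative)
}

def h100_equivalents(fleet: dict[str, int]) -> float:
    """Sum each GPU count weighted by its assumed H100-relative performance."""
    return sum(count * H100_EQUIVALENT_WEIGHT[model] for model, count in fleet.items())

# Hypothetical fleet mix chosen only to show how a total of 600,000 could be reached.
example_fleet = {"H100": 350_000, "A100": 400_000, "L40S": 520_000}
print(f"{h100_equivalents(example_fleet):,.0f} H100 equivalents")
```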

Also, the successor to the H100, the purported B100 (codenamed “Blackwell”), is likely to appear in the middle of this year. With faster HBM3e memory from SK Hynix, Nvidia’s overall top-of-the-line AI performance will increase significantly. With Nvidia holding more than 90 percent of the AI GPU market and already leading the way in performance, these developments strongly influence how quickly AI innovation can take place.

Meta’s AI approach is different

Meta takes a different AI approach than its competitors. Where OpenAI/Microsoft and Google keep their most advanced models secluded within their corporate walls, Meta says it opts for open source. At first, this connection to open source was accidental: the first Llama model, intended to be shared only for research purposes, leaked in early 2023. Llama 2, by contrast, was openly released from the start, although not everyone agrees it is actually an open source model because of restrictions on commercial use.

Zuckerberg insists that future AI models will also be "open source" in this way. "This technology is so important and the opportunities are so great that we should open source and make it as widely available as we responsibly can."

Also read: Hyperscalers’ AI chip capacity is heavily underutilized