Huawei reports that it has developed its own HBM for the next generation of its AI chips. The company presented the technology as an important step toward increasing the performance of its Ascend processors and positioning itself as a serious competitor to Nvidia.
HBM, or High-Bandwidth Memory, plays a crucial role in modern AI chips. Stacking DRAM dies vertically shortens signal paths and significantly increases the bandwidth available to the chip. This not only delivers higher performance but also reduces energy consumption for data-intensive tasks such as training and serving large language models. Because the memory sits directly next to the processor, unnecessary data movement is minimized.
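To see why bandwidth, rather than raw compute, often sets the ceiling for these workloads, a simple roofline-style estimate helps. The sketch below uses illustrative figures (a hypothetical 500 TFLOPS accelerator with 1.6 TB/s of HBM bandwidth), not Huawei's specifications.

```python
# Roofline-style sketch with illustrative figures (not Huawei specifications):
# a kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved)
# falls below the accelerator's ratio of peak compute to memory bandwidth.

def attainable_tflops(intensity_flops_per_byte: float,
                      peak_tflops: float,
                      bandwidth_tb_s: float) -> float:
    """Upper bound on throughput for a kernel with the given arithmetic intensity."""
    memory_bound_tflops = intensity_flops_per_byte * bandwidth_tb_s  # TB/s * FLOP/byte = TFLOPS
    return min(peak_tflops, memory_bound_tflops)

# Hypothetical accelerator: 500 TFLOPS peak compute, 1.6 TB/s of HBM bandwidth.
for intensity in (10, 100, 1000):
    print(f"{intensity:>5} FLOP/byte -> {attainable_tflops(intensity, 500, 1.6):.0f} TFLOPS attainable")
```

At low arithmetic intensity the memory system is the bottleneck, which is why faster HBM translates directly into higher effective performance for large-model workloads.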
This step is particularly important for Huawei, as US sanctions prevent it from accessing HBM technology from foreign suppliers. With its own solution, the company aims to break this dependency and strengthen its technological autonomy.
The first generation consists of two variants. HiBL 1.0 offers a bandwidth of 1.6 terabytes per second and a capacity of 128 gigabytes. It will be used in the Ascend 950PR, which is due to launch in the first quarter of next year. That chip supports a variety of low-precision data formats, including FP8 and MXFP8, and is designed to provide improved vector computing power and double the number of interconnections.
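As a rough illustration of why low-precision formats matter at a given bandwidth: FP8 stores each value in half the bytes of FP16, so the same link can stream roughly twice as many parameters per second. The figures below are back-of-the-envelope, not vendor benchmarks; only the 1.6 TB/s number comes from the article.

```python
# Back-of-the-envelope only: lower-precision formats halve the bytes per value,
# so a fixed memory bandwidth streams proportionally more parameters per second.
# The 1.6 TB/s figure is the HiBL 1.0 number quoted above; the rest is generic.

BANDWIDTH_BYTES_PER_S = 1.6e12            # 1.6 TB/s
BYTES_PER_VALUE = {"FP16": 2, "FP8": 1}   # storage size per element

for fmt, nbytes in BYTES_PER_VALUE.items():
    values_per_second = BANDWIDTH_BYTES_PER_S / nbytes
    print(f"{fmt}: ~{values_per_second:.2e} values streamed per second")
```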
In addition, there is the HiZQ 2.0, intended for the Ascend 950DT. This variant has a bandwidth of 4 terabytes per second and a capacity of 144 gigabytes. According to Huawei, the emphasis here is on accelerating inference and improving decoding performance.
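Decoding in particular tends to be limited by memory bandwidth: at small batch sizes, each generated token requires streaming roughly the full set of model weights once, so bandwidth puts an upper bound on tokens per second. The sketch below illustrates this with a hypothetical 70 GB model; it is an estimate under that assumption, not a measured figure for either chip.

```python
# Illustrative estimate: with batch size 1, generating each token streams roughly
# the entire set of model weights once, so memory bandwidth caps decode speed.
# The 70 GB model size is a hypothetical FP8-quantized ~70B-parameter model.

def max_decode_tokens_per_s(bandwidth_tb_s: float, model_size_gb: float) -> float:
    """Bandwidth-bound upper limit on tokens generated per second."""
    return bandwidth_tb_s * 1000 / model_size_gb  # GB/s divided by GB read per token

MODEL_SIZE_GB = 70  # assumed, not from the article
for bandwidth in (1.6, 4.0):  # HiBL 1.0 and HiZQ 2.0 bandwidths quoted above
    limit = max_decode_tokens_per_s(bandwidth, MODEL_SIZE_GB)
    print(f"{bandwidth} TB/s -> ~{limit:.0f} tokens/s upper bound")
```

Under these assumptions, moving from 1.6 to 4 terabytes per second raises the bandwidth-bound decode ceiling by the same 2.5x factor, which is consistent with Huawei's stated focus on decoding performance for the 950DT.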
New SuperPod technology
At the same time, Huawei presented its new SuperPod technology, which allows up to 15,488 Ascend-based accelerator cards to be linked together. The company states that it now operates a supercluster of approximately one million cards. This approach is intended to compensate for the fact that a single Huawei chip is less powerful than Nvidia's most advanced AI processors. By combining chips into large clusters, Huawei aims to deliver competitive performance.
The manufacturer has also announced a roadmap for the coming years. The Ascend 950PR, which will be released early next year, will be followed by the 950DT at the end of 2026, the 960 at the end of 2027, and the 970 at the end of 2028. This underscores Huawei’s ambition to gain market share in the AI chip market in the coming years.
With the combination of self-developed HBM memory, cluster technology, and a new chip series, Huawei is taking a clear step toward challenging Nvidia. The introductions show that, despite Western sanctions, the Chinese chip industry is steadily continuing to develop its own alternatives.