Chinese technology companies are increasingly moving the training of their AI models to data centers outside China. In doing so, they are attempting to maintain access to more powerful Nvidia chips, which are now virtually unavailable in China itself due to tighter US export controls.
This is according to the Financial Times, which cites sources with direct knowledge of the situation. Since April, when the US further restricted sales of Nvidia’s H20 chip, AI model training at offshore locations has steadily increased. Southeast Asia in particular has become an important region where Chinese companies rent computing power to develop their latest large language models.
Alibaba and ByteDance, among others, use data centers in the region through lease arrangements with facilities managed by non-Chinese parties. Because the chips are physically deployed outside China, this allows them to circumvent US technology restrictions.
DeepSeek remains an exception thanks to chip stockpile
AI developer DeepSeek is an exception. The company managed to build up a large stock of Nvidia chips before the US export bans and can therefore still use domestic training facilities. In addition, DeepSeek is working with Chinese chip manufacturers led by Huawei to optimize and develop a new generation of domestic AI chips.
This development underscores how geopolitical tensions surrounding advanced chips are reshaping the global AI infrastructure. Chinese technology companies want to maintain their position in the global AI competition and are therefore actively seeking ways to retain access to high-quality computing power. According to those involved, another factor is that demand for training capacity for large language models is growing faster than domestic infrastructure can keep pace with, especially now that several Chinese players are simultaneously developing ever-larger models.
At the same time, Southeast Asia is a strategic fallback region for these companies, thanks to a stable regulatory climate, fast connectivity, and the rise of hyperscale facilities capable of handling heavy AI training.