An AI processor designed together with Broadcom should let OpenAI overcome a persistent problem: dependency. Nvidia, Google Cloud, and previously Microsoft have been the enablers of the AI specialist's rise, but OpenAI wants to put that dependence behind it. That will likely take a long time, however.
The chip, whose existence has been confirmed to the Financial Times, is scheduled to arrive in 2026. It will not be offered to outside customers: no one can purchase cloud credits to use it, and only OpenAI will run workloads on the processor. Whether the company will be able to train new frontier models such as GPT-6 on the chip, which has not yet gone into mass production, remains unknown.
Breakthrough or helping hand
If OpenAI succeeds in training a state-of-the-art LLM on its own AI chip, it will be a first. China's DeepSeek has already attempted to produce a successor to the DeepSeek-R1 model without Nvidia hardware, but has so far failed due to inadequate hardware and software tooling. As a rule, the AI industry remains tied to Nvidia, albeit in different forms. Google, for example, is the only party that has established itself as both a hardware builder and a model builder, thanks to its own Tensor Processing Units. OpenAI is still a long way from that.
It is much more likely that OpenAI's new chip is intended for the day-to-day operation of GPT-5 and later LLMs. In that case, the processor will give the AI player a boost, with growing capacity driving costs ever lower. That should free OpenAI from the current status quo, in which it spends billions of dollars to deliver ever-better AI models at minimal margins.
A change of course takes shape
In February, sources confirmed that OpenAI would finalize the design of its first proprietary chip within months for production at TSMC. That move was already a follow-up to earlier plans by CEO Sam Altman to build AI chips with the support of Middle Eastern investors.
Broadcom CEO Hock Tan said during the presentation of the quarterly figures that the company expects its AI revenues to improve "significantly" in 2026. He mentioned orders worth more than $10 billion from a new customer, without identifying it further. It is almost certain that this customer is OpenAI. Earlier this year, Tan hinted at four potential customers that are "deeply involved" in creating custom chips, in addition to three existing major customers.
Internal focus
Reuters reported in October last year that OpenAI was working with Broadcom and TSMC on its own chip. The trio sounds like a golden combination for the project: OpenAI knows better than anyone what it wants from AI hardware, Broadcom is a veteran in custom silicon of all kinds, and TSMC is a master at manufacturing working processors as efficiently as possible. OpenAI has explored various options to diversify its supply chain and reduce costs; in addition to Nvidia, it currently also uses AMD chips to meet its infrastructure demands.