American AI chip developer Cerebras Systems has raised another $1 billion in growth capital. The Series H financing round values the company at approximately $23 billion.
The investment follows remarkably quickly on a previous round of $1.1 billion, completed just four months ago. The new round was led by Tiger Global, with participation from Benchmark, Fidelity Management & Research Company, Atreides Management, Alpha Wave Global, Altimeter, Coatue, 1789 Capital, and chip manufacturer AMD, among others.
The timing of the financing coincides with reports that Cerebras recently signed a multi-year agreement worth more than $10 billion with OpenAI for the supply of AI hardware. That could explain the sudden need for additional capital, as the production and rollout of the systems are very capital-intensive, SiliconANGLE reports.
Cerebras has distinguished itself for years with a different chip architecture. Instead of building AI accelerators from multiple separate chips, the company develops processors that consist of a single complete silicon wafer. According to the company, the current generation, the Wafer Scale Engine 3, contains approximately four trillion transistors. That is many times more than modern GPUs, such as Nvidia’s Blackwell B200.
Internal memory pool
Approximately half of the chip surface is reserved for an internal SRAM memory pool of 44 gigabytes. This allows many AI models to run entirely on the chip without constantly moving data to external HBM memory. This reduces delays that normally occur due to data transport between the processor and memory and should significantly speed up processing.
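A back-of-envelope calculation illustrates why keeping the weights in on-chip SRAM matters. The bandwidth figures below are illustrative assumptions for the sketch, not vendor specifications:

```python
# Time to stream a model's full weight set once, on-chip vs. off-chip.
# Bandwidth numbers are illustrative assumptions, not published specs.

def stream_time_seconds(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time to move the full weight set once at a given sustained bandwidth."""
    return model_bytes / bandwidth_bytes_per_s

MODEL_BYTES = 40e9    # a 40 GB model that fits inside the 44 GB SRAM pool
ON_CHIP_BW = 20e15    # assumed aggregate on-chip SRAM bandwidth (~20 PB/s)
OFF_CHIP_BW = 8e12    # assumed external HBM bandwidth (~8 TB/s)

t_sram = stream_time_seconds(MODEL_BYTES, ON_CHIP_BW)
t_hbm = stream_time_seconds(MODEL_BYTES, OFF_CHIP_BW)
print(f"SRAM: {t_sram * 1e6:.0f} us, HBM: {t_hbm * 1e3:.0f} ms, "
      f"ratio: {t_hbm / t_sram:.0f}x")
```

Under these assumed numbers, a full pass over the weights is thousands of times faster from on-chip SRAM, which is the effect the reduced data-transport delay refers to.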
Wafer-scale chips have historically rarely been built because of manufacturing yield problems. The larger the chip, the greater the chance that fabrication defects will render some of the transistors unusable. Cerebras attempts to overcome this by dividing the WSE-3 into approximately 900,000 separate cores. If a defect occurs in one part of the chip, the rest of the system can reroute data traffic so that the processor as a whole remains usable.
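The yield argument can be sketched numerically. This is an illustrative model, not Cerebras's actual methodology, and the defect count is an assumption; it only shows why losing individual cores is far more tolerable than losing the whole die:

```python
# Illustrative yield sketch (assumed numbers, not Cerebras data):
# each random defect lands on one core; a monolithic design is scrapped
# by any single defect, while a redundant design only loses the hit cores.

CORES = 900_000   # roughly the WSE-3 core count
DEFECTS = 150     # assumed number of random fabrication defects on the wafer

# Probability that any one core survives all defects, assuming defects
# land uniformly and independently across the cores:
p_core_ok = (1 - 1 / CORES) ** DEFECTS
expected_good = CORES * p_core_ok

print(f"expected usable cores: {expected_good:.0f} of {CORES} "
      f"({expected_good / CORES:.4%})")
```

Under these assumptions, well over 99.9% of the cores remain usable even though a monolithic die with the same defect count would be a total loss, which is the intuition behind routing around defective cores.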
The chip is supplied as part of a water-cooled system, the CS-3. According to Cerebras, a single system delivers 125 petaflops of compute. Customers can link up to 2,048 of these systems in a cluster with a theoretical capacity of 256 exaflops. That would be sufficient to train very large language models with tens of trillions of parameters.
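The headline cluster figure follows directly from the per-system number, as a quick check confirms:

```python
# Sanity check of the cluster figures quoted above.
PER_SYSTEM_PFLOPS = 125     # compute per CS-3 system, per Cerebras
MAX_SYSTEMS = 2048          # maximum cluster size, per Cerebras

total_pflops = PER_SYSTEM_PFLOPS * MAX_SYSTEMS   # 256,000 petaflops
total_exaflops = total_pflops / 1000             # 1 exaflop = 1,000 petaflops

print(f"{MAX_SYSTEMS} systems x {PER_SYSTEM_PFLOPS} PFLOPS "
      f"= {total_exaflops:.0f} EFLOPS")  # -> 256 EFLOPS
```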
Documents previously submitted by Cerebras for an IPO show that the company posted revenue of $136.4 million in the first half of 2024. That was more than ten times as much as in the same period a year earlier. Losses fell from $77.8 million to $66.6 million during that period.
Cerebras later withdrew its IPO application, stating that the information was now outdated and no longer accurately reflected the company’s rapid growth. The company now reportedly plans to resubmit its IPO documentation, with the aim of possibly going public as early as the second quarter.