The chip design company is branching out into several new sectors in addition to smartphones.
Chip design firm Arm, which is owned by SoftBank Group, reported a 28% increase in revenue for the latest quarter as it prepares for a highly anticipated initial public offering this year. Revenue rose to $746 million from $581 million a year earlier, driven by growing adoption of internet-of-things devices and high royalty rates in smartphones.
Diversification drives growth
Arm’s technology and designs are already pervasive across the electronics industry. The chip designer dominates segments such as mobile phones, and its customers include the likes of Apple, Samsung and AWS.
Arm’s Chief Executive Officer Rene Haas has been tasked with expanding into additional markets and sectors, including computers, data centers and automotive applications. In an interview with CNBC on Tuesday, Haas confirmed: “Right now, we’ve diversified into cloud, automotive and IoT.” Because of this diversification, he said, the company has “done quite well” financially.
The growth is no doubt welcome news to SoftBank founder Masayoshi Son, who has bet heavily on Arm. Indeed, according to a report in Bloomberg, the Japanese billionaire has said that he expects the firm’s impending IPO to be the biggest ever for a chipmaker. That could be a difficult feat to accomplish, however, given the recent slide in technology valuations as the chip industry hits soft demand in key product segments.
SoftBank’s CFO, Yoshimitsu Goto, confirmed this week that the company was targeting an IPO for the British chip design firm sometime in 2023. That was all he would say, however.
Not just data centers, but also AI
Arm’s success has been built in part on its neutral position, which has allowed the firm to work with competitors across industries and geographies. According to Haas, however, the company’s real strength lies in its ability to design for power efficiency. This, he said, is creating great opportunities for Arm not just in data centers, where power consumption is always a factor, but also in artificial intelligence.
The large language models used in AI, Haas explained, “require huge compute capabilities, which need to run on very power efficient architectures,” adding: “which is what we do.”