CoreWeave is one of the rising stars of AI, with a large fleet of GPUs that customers can use for their intensive AI workloads. After Cisco already expressed interest in investing, Pure Storage is now partnering with the company.
AI compute and fast storage go hand in hand. GPUs need continuous access to memory, and especially to memory bandwidth, to run AI workloads optimally: they have to be fed data constantly for training, fine-tuning, or inference on LLMs. The combination of Nvidia hardware within CoreWeave's infrastructure and a party like Pure Storage is therefore an obvious one. Pure Storage CTO Rob Lee speaks of a "shared commitment to AI innovation." "The integration of the Pure Storage platform into CoreWeave's specialized cloud environments allows customers to tailor infrastructure as they see fit and maximize performance."
Tip: Why Cisco is interested in AI success story CoreWeave
From crypto to AI
As highlighted in our earlier blog about CoreWeave, the company was born during the crypto hype around 2017. Since 2019, however, it has focused on GPU-based cloud infrastructure, a business that has become hugely lucrative since the rise of generative AI. Its own site also mentions use cases such as VFX work, but AI undoubtedly attracts the most attention at the moment. CoreWeave is far from the only player in this field: Crusoe and DataCrunch are just two examples of similar GPU-driven competitors.
Still, CoreWeave has the wind in its sails, with possibly Cisco and now certainly Pure Storage as backers and allies. In Pure Storage's case, the partnership means offering the Pure Storage platform within CoreWeave's environments. Both parties speak of a "no-compromise solution," referring to the Nvidia hardware that will be brought together with Pure Storage's all-flash solutions, which combine speed with capacity. Pure Storage is already working hard to make HDDs obsolete with its proprietary architecture, and its newer solutions specifically target AI workloads in multiple ways. An overview of the more recent innovations can be found here:
Read more: Pure offers storage for AI and AI for storage
Crown prince of AI?
Which party will actually become the chosen AI player is not yet clear. Nor is it known how many players of this kind will see sufficient demand in the longer term. The same pain points are being addressed by a large number of vendors, and the value of that differentiation is just as difficult to assess for now. Consider Google Kubernetes Engine, which this week proved capable of training LLMs with more than a trillion parameters thanks to 65,000 linked nodes. Today, that is still overkill for anyone not named OpenAI: among models whose parameter counts are public, Llama 3's largest tops the list at 405 billion, while GPT-4 is said to have 1.8 trillion. OpenAI, incidentally, works with Microsoft and has built an "AI supercomputer" together with Azure.
A party like CoreWeave distinguishes itself by making its focus abundantly clear: AI compute, especially via GPUs, plus partnerships with popular vendors in this area, without attempting to offer an entire public cloud. Whether this "AI hyperscaler" approach actually attracts large organizations looking to fine-tune their own models remains to be seen. At the very least, Cisco and Pure Storage seem to believe in it.