Sailing on a promise to make cloud simple and scalable, DigitalOcean has now raised the mast on what it says is an easy-to-deploy advanced AI infrastructure service. Employing a packaged, pre-architected, pre-provisioned virtual machine technique branded as DigitalOcean GPU Droplets, AI developers can run AI experiments, train large language models and scale AI projects without complex configuration. It is, in other words, GPUs-as-a-Service (GPUaaS, possibly pronounced guh-pooh-ass) for AI/ML implementations.
The company offers various GPU solutions — including on-demand virtual GPUs, managed Kubernetes and bare metal machines — for AI developers to tap into GPUs from market capitalisation-focused accelerated computing company Nvidia.
DigitalOcean GPU Droplets are available in single-node or multi-node configurations, depending on the workload requirements at hand. The company’s theory behind ‘droplets’ (this is DigitalOcean, so a small bead or globule of its know-how would be a droplet, get it?) is pre-aligned software provision that allows AI engineers to avoid the several steps, and the technical knowledge, typically needed to configure security, storage and network requirements for a deployment.
DigitalOcean API suite
DigitalOcean GPU Droplets can be set up with a few clicks on a single page. DigitalOcean API users will also enjoy the same simple setup and management, as GPU Droplets come integrated into the DigitalOcean API suite and can be spun up with a single API call.
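For the curious, the single call in question is the documented POST /v2/droplets endpoint. What follows is a minimal sketch in Python; the droplet name, the tor1 region and the gpu-h100x1-80gb size slug are illustrative assumptions, so check the sizes endpoint for what your account can actually provision.

```python
# A minimal sketch of the "single API call" setup against the documented
# POST /v2/droplets endpoint. The droplet name, region and size slug are
# illustrative assumptions, not guarantees of what an account can reach.
import os
import requests

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    json={
        "name": "llm-experiment-01",   # hypothetical name
        "region": "tor1",              # assumed GPU-capable region
        "size": "gpu-h100x1-80gb",     # assumed single-H100 slug
        "image": "ubuntu-22-04-x64",   # a standard base image slug
    },
)
resp.raise_for_status()
print("created droplet", resp.json()["droplet"]["id"])
```

Swapping in an 8X size slug is, per the company’s pitch, the whole of the scaling story at this layer.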
“We’re making it easier and more affordable than ever for developers, startups and other innovators to build and deploy generative AI applications and move them into production,” said Bratin Saha, chief product and technology officer at DigitalOcean. “To do that, they need access to advanced AI infrastructure without the added cost and complexity. Our GPUs as a service open this opportunity to a much broader user base.”
Nvidia H100 GPUs
The company is also expanding its managed Kubernetes service to support Nvidia H100 GPUs, bringing H100-enabled worker nodes to containerised Kubernetes environments.
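In practice, once a cluster has an H100 node pool, pods claim the hardware through the standard Nvidia device-plugin resource key rather than anything DigitalOcean-specific. A hedged sketch with the official Kubernetes Python client follows; the pod name and CUDA image tag are assumptions for illustration.

```python
# A hedged sketch: run nvidia-smi in a pod that claims one H100 via the
# standard nvidia.com/gpu resource key exposed by Nvidia's device plugin.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="h100-smoke-test"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # assumed tag
                command=["nvidia-smi"],  # prints the GPUs the pod can see
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # claim one H100
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

If scheduling works as intended, the pod’s logs should list exactly one H100.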
For wider background here, Nvidia H100 GPUs feature 80 billion transistors and are built on a custom TSMC (Taiwan Semiconductor Manufacturing Company) 4N process to run the company’s own Hopper GPU architecture. Named for the celebrated Grace Hopper (obviously), the Hopper architecture (actually a microarchitecture) is built around streaming multiprocessors backed by a fast memory subsystem. Nvidia CEO Jensen Huang doesn’t usually get enough time to explain these elements as he needs to be on three keynote stages per day, but it makes for nice contextual background.
DigitalOcean says that its infrastructure offerings lower the barriers to AI development by providing fast, easy, and affordable access to high-performance GPUs without requiring upfront investments in costly hardware.
The new building blocks are DigitalOcean GPU Droplets, which provide Nvidia H100 GPU virtual servers, available in 1X and 8X configurations. DigitalOcean is offering Droplets with as little as one Nvidia H100 (the industry norm is typically closer to eight), providing more cost flexibility. There is also DigitalOcean Kubernetes GPU support, a managed Kubernetes service that supports Nvidia H100 GPUs, available in 1X and 8X configurations.
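How does a developer see which of these 1X and 8X configurations are reachable? One hedged route is the documented GET /v2/sizes endpoint; the substring filter below assumes the GPU size slugs carry ‘gpu’ in their names, which is an inference rather than a guarantee.

```python
# List the GPU-flavoured size slugs (1X vs 8X H100) visible to an account,
# via the documented GET /v2/sizes endpoint.
import os
import requests

resp = requests.get(
    "https://api.digitalocean.com/v2/sizes",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    params={"per_page": 200},  # the sizes list is paginated
)
resp.raise_for_status()

for size in resp.json()["sizes"]:
    if "gpu" in size["slug"]:  # assumed naming convention
        print(size["slug"], size["vcpus"], "vCPUs,", size["price_hourly"], "USD/hr")
```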
“Today’s announcement is one of many steps that DigitalOcean is taking on its roadmap to offer AI platforms and applications,” said Saha and team. “[We have] innovations that include a brand new generative AI platform designed to make it easier for customers to configure and deploy the best AI solutions for their needs, including agents such as chatbots. With these innovations, DigitalOcean aims to democratize AI application development by simplifying the otherwise complex AI tech stack.”
The next wave
Looking ahead to its next wave (Ed: enough with the ocean puns now, please), DigitalOcean will also provide pre-built components such as hosted large language models and data ingestion pipelines, enabling businesses to bring their own knowledge bases to bear when creating AI-powered applications. As the productised, packaged compartmentalisation of enterprise software continues, in line with the general trend to augment through accelerators and automations powered by machine learning functions, established best practices and codified methodologies, the drive to offer GPU power in these easier-to-consume formats chimes with wider movements at the platform engineering level. We can expect more, and it will inevitably come.