
Google Cloud is releasing three new cloud storage services for AI workloads, each designed for the needs of a specific type of workload.

AI brings new digital capabilities to businesses, but it also adds to their existing storage needs. Like other digital components of a business, the data behind AI can be stored in the cloud.

Every AI workload has different characteristics, determined by the specific application that contains the AI and the underlying model that drives it. In general, the technology demands a lot of storage capacity.

Google Cloud is now releasing three cloud storage services to meet the specific needs of different AI workloads.

AI applications that require a lot of computing power

Training AI models requires huge amounts of computing power, according to Google. For this scenario, the cloud provider is releasing Parallelstore. The service is meant to avoid wasting computing power by continuously feeding GPUs the data they need to optimize the AI model.

Parallelstore can also serve AI applications that consume a lot of computing power and need to respond quickly. Google Cloud says the service is based on Intel's Distributed Asynchronous Object Storage (DAOS) architecture, which gives all nodes equal access to the available storage. More importantly, the virtual machines running the GPUs can access that storage at any time, without having to wait.
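To picture what "keeping the GPUs fed" means in practice, here is a minimal Python sketch of the idea: a few reader threads load training shards from a mounted file system in the background so the compute work is not stuck waiting on storage. The mount path and file names are placeholders for illustration, not part of Google Cloud's documentation.

from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

MOUNT_POINT = Path("/mnt/parallelstore/training-shards")  # hypothetical mount path

def load_shard(path: Path) -> bytes:
    # Read one training shard from the parallel file system.
    return path.read_bytes()

def training_step(shard: bytes) -> None:
    # Placeholder for the actual GPU work done on a shard.
    print(f"processed shard of {len(shard)} bytes")

def run() -> None:
    if not MOUNT_POINT.is_dir():
        print(f"{MOUNT_POINT} is not mounted; nothing to do")
        return
    shards = sorted(MOUNT_POINT.glob("*.bin"))
    # Reader threads load shards in the background while earlier ones are
    # being processed, so the accelerators do not sit idle on I/O.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for shard in pool.map(load_shard, shards):
            training_step(shard)

if __name__ == "__main__":
    run()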

Retrieving data from the managed Cloud Storage service

Companies that already store their data in the Cloud Storage service can connect AI workloads to it via Cloud Storage FUSE. Google Cloud says it wants to deliver four benefits with the service: compatibility, reliability, performance and portability.

The new service provides access to data stored in Cloud Storage, where it is organized into separate containers (buckets), and lets that data be used as if it were local files.

Specifically, the service targets AI workloads that require file system semantics. In other words, these AI applications need a file system to store and access training data, models and checkpoints.

The promised compatibility and performance stem from the fact that Cloud Storage buckets can be used like local files. Reliability derives from the service's integration with the Cloud Storage client library for Go. Finally, portability comes from the ability to bring Cloud Storage FUSE into the enterprise environment as a Linux package.
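As a rough illustration of the "buckets as local files" idea, the Python sketch below writes and reads model checkpoints with ordinary file I/O under a path where a bucket has been mounted with the gcsfuse tool (for example, gcsfuse my-training-bucket /mnt/gcs; both the bucket name and mount point here are placeholders, not real resources).

from pathlib import Path
from typing import Optional

CHECKPOINT_DIR = Path("/mnt/gcs/checkpoints")  # hypothetical FUSE mount path

def save_checkpoint(step: int, weights: bytes) -> Path:
    # Writing a plain file; the FUSE mount turns this into an object in the bucket.
    CHECKPOINT_DIR.mkdir(parents=True, exist_ok=True)
    path = CHECKPOINT_DIR / f"step-{step:06d}.ckpt"
    path.write_bytes(weights)
    return path

def load_latest_checkpoint() -> Optional[bytes]:
    # Reading back through the same ordinary file interface.
    checkpoints = sorted(CHECKPOINT_DIR.glob("step-*.ckpt"))
    return checkpoints[-1].read_bytes() if checkpoints else None

if __name__ == "__main__":
    save_checkpoint(1, b"\x00" * 16)
    latest = load_latest_checkpoint()
    print(f"restored {len(latest or b'')} bytes")

The point of the sketch is that no object-storage API calls appear anywhere: the AI application keeps its familiar file-based workflow while the data lives in Cloud Storage.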

Google Cloud and NetApp

For the last service, Google Cloud is joining forces with NetApp. The parties are introducing Google Cloud NetApp Volumes, which allows customers to move enterprise applications from Windows and Linux environments to the cloud service. Integrating SAP and VMware workloads is also possible.

“By extending our partnership with Google Cloud, NetApp is passing ONTAP’s storage and data management capabilities to Google Cloud customers as a first-party service, making it easy to leverage enterprise-grade storage in essential workloads, optimize file storage and support Windows, Linux, SAP and VMware workloads,” said Joost van Drenth, solutions specialist and field CTO at NetApp, in a press release.

This is not a cloud storage service that focuses on AI workloads but rather serves as a storage service for enterprise applications, in which AI can certainly have a place. Incidentally, Google Cloud NetApp Volumes is based on NetApp ONTAP software, so data management and protection are also part of the service.

While the storage needs of AI workloads are a fairly recent development, the collaboration between Google Cloud and NetApp did not have to wait for it. NetApp previously integrated its data storage and management solutions into the cloud services of AWS and Azure. By now integrating with Google Cloud as well, the company solidifies its position as an established storage provider.