According to Red Hat, software-defined storage is the foundation for the hybrid cloud. That is why storage is a foundational element in all of the open-source specialist's products. Techzine talked about this with Ranga Rangachari, vice president and general manager of Red Hat Storage.
Red Hat's various products should make it easier to adapt IT environments to the hybrid cloud and to manage them there. At the Red Hat Summit held earlier this year, a whole series of new products and solutions was presented to achieve this. The complete Red Hat stack, for example, is now positioned as the perfect enabler for the hybrid cloud.
However, if you look closely at the structure of this stack, you will see that it rests on two elements: the Red Hat Enterprise Linux operating system (RHEL) and Red Hat Storage. We have written about RHEL before, but what about Red Hat Storage? How does this basic element help customers switch to hybrid cloud environments more easily, and how can the open-source specialist help with this?
Software-defined storage development
Before we go into this in more detail, we need to look at how software-defined storage has developed. According to Rangachari, the arrival of Linux-based containers and, of course, cloud environments made it necessary to have cloud-native storage solutions as well. This mainly meant the development of software-based storage solutions that can move storage between on-premise and hybrid or public cloud environments in a consistent manner. This form of storage proved particularly suitable for hybrid cloud environments, because it makes it easier for customers to scale horizontally across different environments.
An important development in favour of software-defined storage was the emergence of applications within hybrid cloud environments. Almost all business processes nowadays rely on applications that need storage. Rangachari indicates that storage has in fact increasingly become a DevOps process, especially since the introduction of containers and associated platforms such as Kubernetes, which make it possible to move applications while they are running.
There is also another phenomenon that is making software-defined storage increasingly important: the rise of Hyper-Converged Infrastructure (HCI) environments, which combine computing power and storage and are well suited to hybrid cloud environments.
Open-source software is very suitable for both developments. That is why, in addition to its enterprise operating system, Red Hat has developed a dedicated storage abstraction layer, Red Hat Storage, to make it as easy as possible for customers to manage their storage needs.
Two clear paths
The open-source specialist's storage portfolio can be divided into storage for its HCI solutions and – currently very popular – container storage, specifically for the Kubernetes-based container platform Red Hat OpenShift.
The storage for the HCI solution Red Hat Hyperconverged Infrastructure, whether or not it is combined with the open-source specialist's OpenStack infrastructure, is based on Ceph. Red Hat Ceph Storage is primarily intended for the automated management of large amounts of data, especially in Hyper-Converged environments. Naturally, all types of storage – file, block and object – are supported.
Companies and organizations can use Red Hat Ceph Storage for workloads such as data analytics, where large amounts of data spread over a multitude of analytical applications must be supported. It also supports on-premise storage clouds: because Ceph is compatible with the AWS S3 protocol, applications can access the storage they need through the same APIs, whether in public or private cloud environments. Furthermore, this storage solution from the open-source provider is particularly popular for OpenStack deployments, because Ceph offers more scalability in the deployment and operational use of OpenStack.
Finally, according to Red Hat, Ceph is very suitable for storing backup and/or archive data. Several providers of backup applications have certified their software for use with Ceph.
Storage and Red Hat OpenShift
Even more popular is the way Red Hat offers storage for containers, especially for the container platform OpenShift.
Red Hat OpenShift Container Storage is suitable for file storage as well as block storage and AWS S3-compatible object storage. In OpenShift version 3.x, OpenShift Container Storage was based on the technology of Gluster, which Red Hat acquired in 2011. Starting with OpenShift version 4.x, the technology of Red Hat Ceph Storage is used, in combination with Rook.io as the storage operator.
Ceph is now the standard storage technology for the Red Hat OpenShift Container Platform, says Rangachari. This means that it can run in the same places where the open-source container platform itself runs: on bare metal, in virtual environments, in containers or in any cloud environment.
In addition, within Red Hat OpenShift, this solution acts as a basic storage layer for container-based applications that require persistent storage. In effect, it provides a permanent home for data, while the containers of the applications that use that data come and go.
This is useful, for example, when companies and organizations want to do more than just develop, test and deploy applications. This storage solution for containers also makes it possible to run stateful workloads: think of a container for a database application such as MongoDB, for which all storage must remain available.
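The "permanent home for data" described here maps onto a standard Kubernetes concept: an application claims durable storage through a PersistentVolumeClaim, which survives the containers that mount it. A minimal sketch for a database workload might look like the following; the storage class name is an assumption and depends on how the cluster's storage is installed.

```yaml
# Hypothetical PersistentVolumeClaim for a MongoDB container.
# The storageClassName is an assumed example; the actual name is
# determined by the OpenShift Container Storage installation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes:
    - ReadWriteOnce        # one node mounts the volume read-write
  resources:
    requests:
      storage: 10Gi        # requested capacity
  storageClassName: ocs-ceph-rbd   # assumed Ceph-backed storage class
```

A MongoDB pod then mounts this claim as a volume; if the pod is rescheduled or replaced, the claim – and the data behind it – remains.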
Furthermore, the storage platform supports all services that come into play with containers, such as analytics, logging and the registry. In terms of scalability, Red Hat OpenShift Container Storage supports a large number of persistent volumes per OpenShift cluster.
Attention to data in containers
Red Hat naturally continues to develop its storage solutions. One of the recent developments is to look at the data within applications and how it can be moved – just like containers – across the hybrid cloud without any problems.
According to Red Hat, this has been quite difficult until now. Data likes to stay where it is, which makes it harder to move it to other locations – such as within the hybrid cloud and between public cloud environments – without affecting applications or end-users. In addition, the amount of data is growing so dramatically that it becomes difficult to keep track of where it is generated and stored, which in turn makes it difficult to put all this data to commercial use.
Red Hat is changing this with the technology of the Israeli company Noobaa, which it acquired at the end of last year. This technology makes data available across different hybrid cloud environments in a software-defined manner: data can be written in one place and, where necessary, made accessible in other environments.
Developers can now set policies for the data and its lifespan. The software acts as an abstraction layer on top of AWS S3, Google Cloud Storage and Azure Blob Storage in public cloud environments; for on-premise applications, Red Hat Ceph Storage can serve as the underlying storage. By placing this abstraction layer over the underlying cloud storage infrastructure, Noobaa's technology gives developers a common set of interfaces and advanced data services for their cloud-native applications.
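In Kubernetes terms, Noobaa exposes such an abstracted bucket to an application through an ObjectBucketClaim, analogous to how a PersistentVolumeClaim requests block or file storage. A minimal sketch might look like the following; the claim name and storage class name are assumptions for illustration.

```yaml
# Hypothetical ObjectBucketClaim requesting an S3-compatible bucket
# from Noobaa; the application receives the endpoint and credentials
# via a generated ConfigMap and Secret, regardless of which cloud
# actually backs the bucket. Names below are assumed examples.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-app-bucket
spec:
  generateBucketName: my-app-bucket        # prefix for the bucket name
  storageClassName: noobaa-default-class   # assumed Noobaa storage class
```

The application itself only sees a standard S3 endpoint, which is what makes the data portable across the underlying cloud backends.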
According to Rangachari, this technology should further expand the capabilities of both Red Hat Ceph Storage and OpenShift Container Storage. The technology is expected to be available in OpenShift Container Storage v4.2 in the fall of 2019.
According to Red Hat's storage specialist, other future developments in the field of storage will take place with data analytics in mind. It will become less important for companies and organisations to carry out these analyses in a large central data layer in the on-premise data centre or in other specific environments. According to the open-source giant, it will soon matter more that these analyses can be carried out where the data is created, i.e. at the edge. The open-source specialist will come up with appropriate solutions for this in the future.
Interest in secondary storage
Rangachari also recognises another currently 'hot topic' within storage: the handling of so-called secondary storage. He points out that this market segment is becoming increasingly important for companies and organisations.
Red Hat wants to collaborate with partners here, because they often have more expertise in this field. The company indicates that it is already keeping an eye on many start-ups and scale-ups for this purpose. Ideally, Red Hat's technology will be used to develop a secondary storage product, especially for Hyper-Converged environments.
Red Hat is well on its way
Red Hat sees storage as an important foundation for all its solutions and, increasingly, for its container platform Red Hat OpenShift. The developments it is now pursuing with Noobaa's software are a good example of this: they should allow Red Hat to offer the best for cloud-native applications and their data, and to give customers everything they need for the hybrid cloud. HCI environments are not forgotten either, thanks to Ceph and, again, the Noobaa technology.
For the future, Red Hat is already well on its way. Important trends such as analytics, artificial intelligence and machine learning, and secondary storage are on its radar, and the company is working hard on solutions for them, sometimes in collaboration with partners. We are therefore curious to see how all these initiatives will take shape and what role the recent acquisition by IBM will play.