Kubernetes has been completely vendor-neutral since version 1.31, the result of the largest code migration the project has ever had to complete.
We described this version of Kubernetes in detail earlier. Still, it is worth pausing to reflect on the transition that is now complete. “In-tree integration” of the five largest cloud providers is now a thing of the past: Kubernetes’ core code is separate from AWS, Azure, Google Cloud, OpenStack and VMware vSphere. This has shrunk the codebase, improving agility and shortening development time for future updates.
However, this also presents challenges. Adoption of new Kubernetes versions tends to be gradual, partly because Kubernetes has no inherent security layer of its own, so updates are rarely driven by urgent security fixes. In short, end users still have time to adjust to the changes in 1.31. For many, the necessary changes were made long ago, as in-tree support for OpenStack (removed in 1.26) and AWS (removed in 1.27) had already been dropped. Connecting to a specific cloud now requires separate components, typically an external cloud controller manager, as sketched below. The migrations are well documented and straightforward to follow.
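One visible effect of this split: it is the external cloud controller manager, not core Kubernetes, that stamps each node with its cloud identity via spec.providerID. Below is a minimal Go sketch using client-go that lists nodes and prints that field; the kubeconfig path is an assumption, and an empty providerID merely suggests no cloud integration is active for that node.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (~/.kube/config is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The external cloud controller manager populates spec.providerID on each
	// node (e.g. "aws:///us-east-1a/i-0abc..."). Core Kubernetes no longer
	// knows anything about the format beyond treating it as an opaque string.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s -> providerID=%q\n", n.Name, n.Spec.ProviderID)
	}
}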
Necessary step
It is a logical and necessary step for Kubernetes to become truly vendor-neutral. Favoring the most widely used cloud players does not suit a general-purpose tool that has been adopted by more than 60 percent of enterprise organizations. Although Kubernetes was born in the cloud, cloud native tooling is now well established in on-prem environments as well. The push towards the lightest possible Kubernetes layer fits right in with that.
In addition, it fits the direction Kubernetes is moving in: modular components that can evolve independently. That already applied to security tooling, which is much needed but lives outside of Kubernetes. A recent example of this decentralization is the Gateway API, which is versioned and released independently of Kubernetes itself.
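That independence is visible in the code: the Gateway API ships as its own Go module (sigs.k8s.io/gateway-api) under its own API group, on its own release schedule. A minimal sketch constructing a Gateway object follows; the gateway and class names are assumptions for illustration.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
)

func main() {
	// The Gateway API lives in the gateway.networking.k8s.io group, not in
	// core Kubernetes, so its types can evolve on a separate cadence.
	gw := gatewayv1.Gateway{
		ObjectMeta: metav1.ObjectMeta{Name: "example-gateway"},
		Spec: gatewayv1.GatewaySpec{
			GatewayClassName: "example-class", // hypothetical class name
			Listeners: []gatewayv1.Listener{{
				Name:     "http",
				Port:     80,
				Protocol: gatewayv1.HTTPProtocolType,
			}},
		},
	}
	fmt.Printf("%s/%s: %s\n",
		gatewayv1.GroupVersion.Group, gatewayv1.GroupVersion.Version, gw.Name)
}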
True hybrid cloud
The next step for Kubernetes is to better leverage hybrid environments. The tool should become “smarter” by recognizing when nodes in a cluster can run in a public or in a private cloud; today that distinction is something operators typically express themselves, through node labels and scheduling rules (see the sketch below). In addition, the team behind Kubernetes will continue to work on integrating with other clouds through better tools and frameworks, standardizing that process instead of Kubernetes itself having to carry and track provider-specific code.
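To make that concrete, here is a small Go sketch of how such placement is expressed today with the standard node affinity API. The label example.com/cloud is hypothetical; an operator would have to attach it to nodes by hand or via tooling, which is exactly the manual work a smarter, hybrid-aware Kubernetes could reduce.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical label distinguishing on-prem from public-cloud nodes.
	const cloudLabel = "example.com/cloud"

	// Require scheduling onto nodes labeled as private cloud; this affinity
	// would be set on a PodSpec's Affinity field.
	affinity := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      cloudLabel,
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"private"},
					}},
				}},
			},
		},
	}

	terms := affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms
	fmt.Printf("schedule only on nodes where %s in %v\n",
		cloudLabel, terms[0].MatchExpressions[0].Values)
}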
That move to a true hybrid cloud without much manual work sounds like the most promising one. As a base technology for the cloud native world, it would mean far fewer headaches for developers and, presumably, considerably lower costs for organizations.