Kubernetes orchestrates. That’s what it does. Also ‘cutely’ known as K8s, Kubernetes is an open source software tool designed for automating the deployment, scaling and management of containerised applications.

Digging deeper, as many will know, containerised applications are smaller elements of code (smaller than a whole app, that is) packaged with just the operating system libraries and dependencies needed to run.

Sometimes also logically referred to as a ‘lightweight executable’, containers are able to run in a consistent way on any given technology infrastructure, with good portability, great resource efficiency and an architectural fit well suited to the new cloud-native universe.

Containerisation contextualisation

Why all the containerisation contextualisation? Well, it’s probably because the still-nascent containerisation space always benefits from a little scene setting. It’s also because we need to explain how containers come together in Kubernetes clusters and how we are going to work with them.

By now we have hopefully all realised that managing Kubernetes clusters is no point-and-click plug-and-play technology. This is why open source DevOps automation platform company Cloudify used its appearance at KubeCon + CloudNativeCon North America 2022 this month to explain how its technology addresses challenges in this sector of the technology fabric.

The company has now introduced Cloudify Discovery, a new automation feature for identifying and registering Kubernetes clusters in the Cloudify platform, which exists to automate cloud management and orchestration tasks. 

This is part of the company’s so-called Environment-as-a-Service (EaaS) technology intended to provide infrastructure automation to manage any cloud, any private datacentre or Kubernetes service from one central point. It also enables developers to self-service their environments.

Self-service DevOps environs

The new Cloudify Discovery feature automates the process of identifying Kubernetes clusters and registering them as self-service DevOps environments in the Cloudify platform. Developers’ burden of manually managing multi-cluster Kubernetes environments is replaced by an automated, scalable service that can handle even the most demanding multi-cluster edge environments. 

“The shift from monolithic to microservices in distributed, multi-cluster Kubernetes environments is maddeningly complex from a cloud management perspective,” said Nati Shalom, founder and CTO of Cloudify. 

CTO Shalom explains the mechanics of the new Discovery feature, saying it allows platform engineers to make Kubernetes clusters accessible as a self-service environment. It can do this regardless of whether these clusters are running in the cloud or on-premises – and it provides a consistent layer of abstraction across different clusters such as EKS, AKS and GKE. 

For the record, those initialisms break down as:

  • Amazon Elastic Kubernetes Service (EKS) 
  • Microsoft Azure Kubernetes Service (AKS)
  • Google Kubernetes Engine (GKE) 

“Developers can finally deploy microservices into environments as though they were all part of a single cluster using a simple filter and tagging mechanism. Developers can also deploy the same microservices across multiple clusters through a single command. In the future, Cloudify will also support policy-based deployment which will further abstract those environments,” clarified Shalom.
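The filter-and-tag mechanism Shalom describes can be imagined roughly as follows — a minimal Python sketch, not Cloudify’s actual API. The `Cluster` class, the `deploy_to_matching` function and the tag scheme are all illustrative assumptions.

```python
# Hypothetical sketch of tag-based multi-cluster deployment.
# None of these names come from Cloudify's real API.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    tags: set = field(default_factory=set)
    deployed: list = field(default_factory=list)  # services deployed so far

def deploy_to_matching(clusters, service, required_tags):
    """Deploy one microservice to every cluster whose tags match the filter."""
    targets = [c for c in clusters if required_tags <= c.tags]
    for c in targets:
        c.deployed.append(service)  # stand-in for a real deployment call
    return [c.name for c in targets]

clusters = [
    Cluster("eks-prod-us", {"cloud:aws", "env:prod"}),
    Cluster("aks-prod-eu", {"cloud:azure", "env:prod"}),
    Cluster("gke-dev", {"cloud:gcp", "env:dev"}),
]

# One "command" fans the same service out to every prod cluster.
print(deploy_to_matching(clusters, "checkout-service", {"env:prod"}))
# → ['eks-prod-us', 'aks-prod-eu']
```

The point of the abstraction is the last line: the developer names a tag filter once and the platform works out which clusters that means, whether they live in AWS, Azure or Google Cloud.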

Why has the company focused on this practice and why does that matter so much to real world cloud-native containerisation use cases?

Multi-cluster mayhem

Because organisations are increasingly deploying multiple Kubernetes clusters across on-premises, cloud and edge environments, the complexity of multi-cluster use cases ranges from simple (separate clusters between development and production environments) to moderate (segregated departments within the same enterprise) to extreme (managing hundreds and potentially thousands of clusters across edge networks). 

Getting all that executed and managed (let alone scaled) correctly is tough, so grasping automation when and where it exists at this level might well sound appealing, especially if that automation is comprehensive enough to offer a complete cloud compute environment as-a-Service.

Magical analyst house Gartner estimates that by 2025, multi-cluster management and security will emerge as the top challenges for organisations deploying Kubernetes applications. Perhaps Cloudify is a Gartner customer, who knows?

Abstracting Kubernetes infrastructure

Cloudify Discovery boosts developer productivity by abstracting developers away from the underlying Kubernetes infrastructure. The Cloudify Discovery mechanism provides a generic implementation that scans a given account and looks for existing Kubernetes resources under that account. 

Once a Kubernetes cluster is discovered, it is registered as a self-service environment under the Cloudify environment section. Each environment includes the relevant authentication token, namespaces and other configuration properties associated with that cluster. Cloudify Discovery provides out-of-the-box discovery on all major clouds, including EKS, AKS and GKE, and is available through the Cloudify catalogue service.
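The scan-then-register flow described above can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions: in reality the scan would call each cloud provider’s API, whereas here an in-memory dict stands in for those calls, and all function and field names are invented for the example.

```python
# Hypothetical sketch of cluster discovery and registration.
# A real implementation would query each cloud's API; an in-memory
# dict stands in for those calls here. All names are illustrative.

def scan_account(account):
    """Return the Kubernetes clusters found in a (simulated) cloud account."""
    return account.get("kubernetes_clusters", [])

def register_environments(accounts):
    """Register each discovered cluster as a self-service environment."""
    environments = {}
    for account in accounts:
        for cluster in scan_account(account):
            environments[cluster["name"]] = {
                "auth_token": cluster["token"],       # credentials for the cluster
                "namespaces": cluster["namespaces"],  # namespaces exposed to devs
                "provider": account["provider"],      # EKS / AKS / GKE etc.
            }
    return environments

accounts = [
    {"provider": "EKS", "kubernetes_clusters": [
        {"name": "prod-us", "token": "t1", "namespaces": ["default", "payments"]}]},
    {"provider": "GKE", "kubernetes_clusters": [
        {"name": "dev", "token": "t2", "namespaces": ["default"]}]},
]

envs = register_environments(accounts)
print(sorted(envs))  # → ['dev', 'prod-us']
```

Each registered environment carries exactly the properties the article mentions — an authentication token, namespaces and cluster configuration — so a developer can pick one from a catalogue without touching the underlying cloud account.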

The Cloudify Discovery implementation is based on the workflow and blueprint engine of Cloudify. The Discovery workflow executes the relevant API call to find the Kubernetes cluster resources and, in turn, calls Cloudify to set the respective environments per Kubernetes cluster. This generic mechanism can be easily modified or extended to support any other resource, or to map any resource information that needs to be part of the environment.

A scheduled workflow feature can be used to periodically run Cloudify Discovery. This keeps all environments updated as it monitors for newly created or updated environments and continuously syncs them with Cloudify environments. 
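The periodic sync described above is, at heart, a reconciliation loop: diff what was just discovered against what is already registered, then apply the difference. A minimal sketch, assuming a real scheduler (cron, a workflow engine) would call `sync()` on a timer — the function name and data shapes are illustrative, not Cloudify’s:

```python
# Hypothetical sketch of the periodic discovery/sync loop.
# A scheduler would invoke sync() on a timer; one call here
# shows the reconciliation logic. Names are illustrative.

def sync(registered, discovered):
    """Reconcile registered environments with freshly discovered clusters."""
    added = sorted(n for n in discovered if n not in registered)
    updated = sorted(n for n in discovered
                     if n in registered and registered[n] != discovered[n])
    removed = sorted(n for n in registered if n not in discovered)
    registered.update(discovered)   # apply additions and updates
    for name in removed:
        registered.pop(name)        # drop clusters that disappeared
    return added, updated, removed

registered = {"prod-us": {"ns": ["default"]}}
discovered = {"prod-us": {"ns": ["default", "payments"]},
              "edge-1": {"ns": ["default"]}}

print(sync(registered, discovered))  # → (['edge-1'], ['prod-us'], [])
```

Run on a schedule, a loop like this is what keeps the environment catalogue continuously in step with reality: new clusters appear automatically, changed ones are refreshed and vanished ones are retired.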

Our zero-touch future

This capability is also used to handle functions like zero-touch provisioning, i.e. automatically detecting new environments as they become available without the need for any explicit configuration and setup phase.

If there is one phrase in the whole Kubernetes container discussion that typifies the actual physical (okay, still virtualised and abstracted) manifestation of where we are going with cloud automation today, it might be zero-touch provisioning. If the ability to leverage (ouch, sorry, no other word worked quite as well) out-of-the-box discovery across a maelstrom of multi-cluster use cases using simple filter and tagging mechanisms sounds cool, then that’s because it just might be.