
Cloud is composable. It’s a claim we’ve heard time and time again as enterprise technology vendors extol the virtues of cloud computing and tell us that the service-based model of compute, storage, analytics and more offers a composable and controllable route to freedom, choice and personal liberation.

A key part of that composable fabric is the use of containers. As we know, containers are discrete, defined units of software code packaged with everything needed to run an application workload, system process or some other smaller component of a higher-level networked system of logic.

In a world (or universe) of containerised computing composability, we have to work with an increased number of moving parts; knowing what goes where at any given moment is no mean feat. This is why we now, of course, champion the use of container orchestration, with open source Kubernetes being the de facto standard in this space.

Also known as K8s, Kubernetes is a platform for managing Linux, Windows and other containers, along with the elements of microservices architectures, across private, public and hybrid cloud environments. Kubernetes software engineers (more usually just known as cloud computing engineers) typically include software application developers, DevOps practitioners, systems architects and others in related disciplines, who use it to automatically deploy, scale, maintain, schedule and operate application containers, often spread across a number of cloud-networked clusters, nodes or other locales.
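To make that automation tangible, here is a minimal sketch using the official Kubernetes Python client (the Deployment name and namespace are hypothetical, and a local kubeconfig with cluster access is assumed) of the kind of scaling request an engineer hands to Kubernetes rather than performing by hand:

```python
# Minimal sketch: scaling a hypothetical Deployment with the official
# Kubernetes Python client. Kubernetes, not the operator, then decides
# which nodes in the cluster the extra pods are scheduled onto.
from kubernetes import client, config

config.load_kube_config()            # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="web",                      # hypothetical Deployment name
    namespace="demo",                # hypothetical namespace
    body={"spec": {"replicas": 3}},  # declare the desired state; Kubernetes reconciles
)
```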

Given the importance, prevalence and prominence of Kubernetes – and the fact that this month sees the Cloud Native Computing Foundation (CNCF) host KubeCon + CloudNativeCon Europe 2023 in Amsterdam – how should we view the technology as it stands today in 2023? In other words, what’s the state of Kubernetes?

Challenges ahead

Rob Tribe, VP of systems engineering at enterprise cloud company Nutanix, reminds us that Kubernetes is now the de facto standard for containers. He says that most enterprises are now starting to evaluate and/or implement Kubernetes at scale, but overall, we’re still in the early days of this technology and there could and should be many more developments to come.

“Deploying Kubernetes at scale in an enterprise, in a cost-effective manner, has some significant challenges, a situation that we have highlighted in the fifth annual Nutanix Enterprise Cloud Index (ECI) report,” said Tribe. “According to Gartner, by 2027, 25% of all enterprise applications will run in containers, an increase from fewer than 10% in 2021. This is a significant challenge for many, given that most Kubernetes solutions are not meant to support enterprise scale and even fewer can do so in a manner that is cost-effective.”

Tribe suggests that his firm’s Nutanix Cloud Platform enables enterprises to run Kubernetes in a software-defined infrastructure environment that can scale linearly. Vishal Ghariwala, CTO for APJ & GC at SUSE, has a considered take on where we are at today. He says that as Kubernetes gains mainstream adoption, customers acknowledge that selecting Kubernetes platforms requires a tailored approach.

“Some will see value in opinionated Kubernetes platforms that include PaaS-like capabilities, even if they may be utilising only a fraction of the capabilities,” said Ghariwala. “Customers who prioritise platform engineering may prefer a Kubernetes platform that can manage distributed Kubernetes clusters in a unified manner while being flexible enough to integrate third-party DevOps tools to create a customised PaaS platform.”

Managed Kubernetes services on the cloud are favoured options for customers with limited resources and skillsets. The SUSE CTO reminds us that mature customers may use more than one of these options due to differing business requirements and skillsets, and to minimise lock-in risks. “It is really encouraging to see that there are multiple paths to Kubernetes adoption as it enables customers to select the option that best fits their use case and requirements,” added Ghariwala.

Paradoxical platforming

At least one person has really homed in on the issue at hand here.

“Kubernetes is at a paradoxical point. Its goal is to simplify the management of containers and it is achieving this at scale. But it is presenting complexity in areas of adoption. The result is a technology that is ripe for platforming,” said Ram Iyengar, developer advocate at the Cloud Foundry Foundation.

Iyengar isn’t the only one to realise this; it’s a theme that reverberates throughout the industry. Emile Vauge, founder and CEO of Traefik Labs, has openly stated that as organisations accelerate their adoption and use of Kubernetes in production, manually managing multiple clusters becomes untenable. After initial adoption, he says, many enterprise IT organisations quickly realise that Kubernetes is simultaneously the most powerful yet complex platform ever to be deployed and managed.

“Managing fleets of Kubernetes clusters introduces connectivity, management and security challenges at levels of unprecedented scale. The only way to efficiently navigate Kubernetes deployments at scale is to adopt a modern cloud-native infrastructure and operational model,” proposed Vauge. “Investing in a Kubernetes-native central control plane – that is fully GitOps-compliant and highly interoperable – will empower organisations to fast-track their Kubernetes deployments and accelerate their digital transformation initiatives,” he said.

Obviously, we’re in a period of time where Kubernetes-native central control plane specialists recommend the use of Kubernetes-native central control plane technologies. Spoiler alert and no prizes for guessing what Traefik Labs does. Self-serving promotional suggestions notwithstanding, the point is still made that we need to think about how we move away from manual management in the Kubernetes space.

A love-hate relationship

“There’s no doubt that containerisation is growing in popularity – we saw pulls and downloads of containers double in the last quarter. That popularity will continue to grow for developers because they are flexible to use and can be deployed where they are needed. From a cloud provider perspective, this means there is a bit of a love-hate relationship in place with Kubernetes,” said Matt Yonkovit, head of open source strategy at Scarf, a company that provides analytics on open source software usage and downloads. 

Yonkovit thinks that cloud service providers (hyperscalers) would like to tie customers more tightly to their own services, but customer demand and the ability to win deals from other providers mean that they will have to support Kubernetes over time and compete on service quality and the value they add.

“The big battleground for the future is still the security side of things,” he says. “You look around the expo halls and there is a reason that almost a quarter of the booths are from security companies – and that is that the security model for Kubernetes is still not fully standardised. Getting to software that is secure by default would be nice, some day.”

IBM Fellow and CTO for IBM Cloud Jason McGee points to another major development theme. He says that one trend that has emerged over the past couple of years is that Kubernetes and containers are being used more to handle high-performance computing workloads. So much so that Big Blue has recently announced its own cloud-native AI supercomputer – IBM Vela – which runs on Kubernetes and containers.

“The meteoric success of Kubernetes can be attributed to several factors but two of the biggest are circumstance and community,” said Suda Srinivasan, VP of strategy and marketing at Yugabyte.

First, says Srinivasan, it’s a case of right place, right time. “As more companies move their applications to the cloud and the technology stack becomes increasingly complex, Kubernetes helps simplify and automate the time-consuming operational tasks of container management, freeing up IT professionals to focus on more strategic tasks.”

Second, he proposes, there is the continued value of open source and a committed community. Kubernetes proves that open source is a robust and reliable alternative to traditional legacy and licensed software.

“A strong and engaged community not only adds value to the project through contributions but also adopts and champions the software, increasing its reach and identifying new and diverse use cases. The success of Kubernetes is evident and shows no sign of slowing down. Every major public cloud and private cloud provider now offers a managed or deployed Kubernetes service offering, allowing enterprises to fully embrace app modernisation, everywhere,” summarised Srinivasan.

More deployment, more risk

The way many organisations deploy Kubernetes is increasing risk. Application development teams are evolving from deploying large clusters to multiple smaller ones. This provides greater flexibility to deploy and run applications, but also creates complexity when applications need to talk to each other securely. This is the opinion of Sitaram Iyer in his capacity as senior director of cloud-native solutions at Venafi, a company known for its automated certificate management technology.

“Within every Kubernetes cluster, every line of code and microservice needs a machine identity for secure communication,” stated Iyer. “By deploying complex, multi-cluster strategies, a vast number of identities are being created that cannot be manually managed. Many companies are turning to solutions like service mesh (Istio, for example) to help manage this influx of machine identities. However, this is actually compounding risk, as service meshes only support self-signed machine identities, which leave organisations vulnerable.”
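To make the self-signed point concrete, here is an illustrative sketch (the certificate file name is hypothetical and the widely used Python ‘cryptography’ library stands in for any dedicated machine identity tooling) of how a workload certificate can be inspected: when the issuer is identical to the subject, the identity was never anchored to an organisational certificate authority.

```python
# Illustrative only: spotting a self-signed workload certificate with the
# 'cryptography' library. A certificate whose issuer equals its own subject
# was not signed by an external CA, which is the exposure described above.
from cryptography import x509

with open("workload-cert.pem", "rb") as f:   # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

if cert.issuer == cert.subject:
    print("Self-signed identity: not anchored to an organisational CA")
else:
    print(f"Issued by: {cert.issuer.rfc4514_string()}")
```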

Instead of relying on self-signed certificates for applications, Iyer claims that companies need to take machine identity management into their own hands by adopting a control plane to automatically renew, revoke and manage machine identities. Guess which company makes machine identity control plane technologies? Yes, it’s true, but the complexity factor is still well-illustrated.

Micha Hernandez van Leuffen, founder and CEO of Fiberplane, sums up all the statements made here so far by saying that one of the biggest challenges facing Kubernetes today is striking a balance between its power and its complexity. While Kubernetes is an incredibly powerful tool for managing containers and infrastructure at scale, he is adamant that its steep learning curve can be daunting for developers looking to deploy their applications quickly and easily.

Next, van Leuffen suggests that to truly become the de facto standard for container orchestration, Kubernetes, and the way developers operate it, must become more approachable and developer-friendly, without sacrificing the advanced features that make it such a powerful platform.

Adoption curve plateau

Finally then, let’s remind ourselves that the Kubernetes project is reaching a plateau in its adoption curve. So says Adolfo García Veytia, staff OSS engineer at Chainguard, a company known for its software supply chain security technology.

García Veytia says that, overall, Kubernetes has matured substantially over the last few years and evolved into the de facto basis for new platforms. In particular, it now has facilities for both extension and runtime policy enforcement, which has allowed the core platform to become slow-moving and boring while the exciting innovation shifts into adjacent projects within the cloud native ecosystem.

“We are successfully making it more boring, which means it is becoming more stable,” states García Veytia, in a welcome revelation. “This phase in the project’s maturity means the community’s investment will go less into building new features and more into ensuring its continuity. You can see this unfolding as member organisations leverage their expertise to help out in areas like infrastructure, artefact distribution and, in our case, supply chain security.”
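As a small illustration of the ‘facilities for extension’ García Veytia refers to, the sketch below (again assuming the official Python client and a local kubeconfig) lists the CustomResourceDefinitions that adjacent projects have registered with a cluster’s API server; this is the mechanism by which much of that ecosystem innovation plugs into the now ‘boring’ core.

```python
# Hedged sketch: Kubernetes' extension mechanism (CustomResourceDefinitions)
# lets projects such as service meshes or policy engines add their own API
# types. Listing the CRDs in a cluster shows what has been bolted on.
from kubernetes import client, config

config.load_kube_config()
ext = client.ApiextensionsV1Api()

for crd in ext.list_custom_resource_definition().items:
    print(crd.metadata.name)   # e.g. a cert-manager or Istio resource type
```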

The state of Kubernetes is apparent then. DevOps specialists think that Kubernetes needs more DevOps, identity specialists think it needs more identity controls and security layer specialists think that Kubernetes needs more security. Data analytics abstraction and democratisation specialists think that... ah okay, you get the point. The state we have reached is basically defined by the power Kubernetes offers and the complexity that comes with it.

When we can progress to viewing Kubernetes as ‘boring’, stable and some kind of utility computing function that’s just there because it has to be, then (as is also the case with every other technology construct, service, methodology or practice) we will be in a good place.