
With great power comes great responsibility, clearly, but also complexity and the need to shoulder a steep learning curve. That extension of the saying may be needed for Kubernetes, the cloud computing container orchestration technology.

The technology’s inherent complexity and power have been likened to the automobile’s internal combustion engine and the act of driving a car: people enjoy the torque, handling and acceleration, but they don’t always want to know about everything happening under the hood (or bonnet, or motorkap as it is in Dutch).

IBM fellow and CTO for IBM Cloud (and GM of the IBM Cloud Platform and Common Services division) Jason McGee may have a job title almost as complex as Kubernetes, but he takes a sanguine view of the recent debate over operational complexity in this space and of how we might develop in the immediate future.

Noting that many developers spend too much time worrying about the underlying infrastructure of Kubernetes – rather than ‘just using it’ – he says the problem is compounded by developers wanting to deploy different types of workloads on Kubernetes… a reality that (perhaps unsurprisingly) doesn’t make things any easier.

Kubernetes workload differentiation

What kind of different workloads? Different types of containerized applications, different application or data service functions, different event-driven workloads and different batch jobs.

IBM itself has identified this complexity-and-variety conundrum and has proposed that all these scenarios be addressed by a single serverless platform, based on open-source technologies.

“In 2021, IBM announced IBM Cloud Code Engine to help developers in any industry build, deploy and scale applications in seconds, while only paying when code is running. Code Engine was made available as a fully managed, serverless offering on IBM Cloud,” said McGee, who calls this moment the industry’s introduction to serverless 1.0, which was at that point focused on enabling the implementation of endpoints (REST APIs, web apps etc.).

While serverless 1.0 offered many benefits to developers, only a small percentage of applications were able to run as serverless functions. In particular, he explains that this stage of the technology’s development did not account for heavy-duty computational applications, for processing large workloads or for analysing data.

Making serverless the default

The team at IBM Cloud Code Engine insists it has been focused on enabling the next era of serverless, logically enough referred to as serverless 2.0, at least for now.

“We saw this first emerge with containers, as containers are now the de facto standard for packaging applications. Developers want an infrastructure where cloud users can run containers without worrying about ownership and management of the computing infrastructure or Kubernetes cluster they are running on. With IBM Cloud Code Engine serverless computing, IBM deploys, manages and autoscales our clients’ cluster. The serverless option enhances overall productivity and also decreases time to deployment – a win/win for deployers,” stated McGee, in an IBM blog.
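The scale-to-zero autoscaling behaviour described above can be sketched with a toy model. To be clear, this is an illustrative assumption, not Code Engine’s or Knative’s actual algorithm: it simply adds instances to keep per-instance concurrency under a target, and drops to zero when there is no traffic.

```python
import math

def desired_instances(in_flight_requests: int,
                      target_concurrency: int = 10,
                      max_scale: int = 10) -> int:
    """Toy autoscaling rule (hypothetical): run just enough instances to
    keep per-instance concurrency at or below the target, capped at
    max_scale, and scale to zero when the workload is idle."""
    if in_flight_requests == 0:
        return 0  # scale to zero: no instances (and no cost) while idle
    return min(max_scale, math.ceil(in_flight_requests / target_concurrency))

print(desired_instances(0))    # idle -> 0
print(desired_instances(25))   # 25 requests / 10 per instance -> 3
print(desired_instances(500))  # demand spike, capped at max_scale -> 10
```

The point of the sketch is the shape of the curve, not the numbers: capacity follows demand in both directions, which is what lets the platform charge only while code is running.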

One IBM user that is seeing the benefits of serverless is Sweap.io, a global event management company that says it has improved its time-to-market. As Sven Frauen, the CIO & co-founder of Sweap.io, noted in an IBM blog, “IBM Code Engine empowers us to handle peak demands (for example, for our email infrastructure with campaigns for large events). The auto-scaling capabilities allow us to focus on delivering value without having to worry about infrastructure management.”

So what happens next in this space? The engineers at IBM now want to elevate and apply the serverless approach to what are being called ‘more complex’ offerings. For now, that mostly refers to areas such as high-performance computing (HPC), but we can almost sniff a suggestion of quantum-in-waiting to come next, if this is the trajectory we are on.

An execution-based pricing model 

McGee isn’t getting above his serverless station quite yet; he suggests that we already know why HPC works well for running massive simulations, such as risk assessments. But still, in these deployments, he is adamant that enterprises are spending considerable amounts on hardware to support their computing needs. With a serverless architecture, IBM says that customers can cut out the hardware costs and move to an execution-based pricing model, where they pay only for the services they need.
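As a back-of-the-envelope illustration of that pricing difference, consider a machine that is only busy about 5% of the month. All the rates and the utilisation figure below are made-up assumptions for the sake of the arithmetic, not IBM’s actual prices:

```python
def fixed_cost(hourly_rate: float, hours: float) -> float:
    # Provisioned hardware: you pay for every hour, busy or idle.
    return hourly_rate * hours

def execution_cost(rate_per_second: float, seconds_used: float) -> float:
    # Execution-based pricing: you pay only while code is actually running.
    return rate_per_second * seconds_used

HOURS_PER_MONTH = 730
provisioned = fixed_cost(0.50, HOURS_PER_MONTH)  # hypothetical $0.50/hour, always on

# Same notional rate, but billed only for the 5% of the month the code runs:
busy_seconds = 0.05 * HOURS_PER_MONTH * 3600
serverless = execution_cost(0.50 / 3600, busy_seconds)

print(f"provisioned: ${provisioned:.2f}/month")  # $365.00
print(f"serverless:  ${serverless:.2f}/month")   # $18.25
```

Under these (invented) numbers, the idle 95% of the month is exactly what execution-based pricing stops charging for; the gap narrows as utilisation rises, which is why the model suits bursty workloads best.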

Is everything alright now then? Has convoluted cloud Kubernetes complexity finally been conquered, so that we can all get on with concentrating on user functionalities and feature enhancements?

Well, perhaps, somewhat, a little, yes – but not at a level where cloud network operations staff will now be able to rest completely easy. There are many API-level connections and new service extensions still to shoulder, even in the world of serverless 2.0 as it stands today.

When is serverless 3.0 coming and what will it feature? That’s easy to answer: somewhere around 18 months from now, with a whole lot more AI on board. That’s the smartest answer for now.

Image credit: Sweap.io