
The state of cloud-native computing in 2025

Cloud-native, meaning cloud-first? You’re kidding, right? Sorry, it’s not 1999 anymore. We’ve moved past the CIO and CTO conversations of the early days of cloud, when the incredulous notion of hosting data and application resources in a Software-as-a-Service (SaaS) model was thought to be the talk of mad folk. As we know, the last quarter century has seen on-premises private cloud dovetail with hybrid cloud, while the gargantuan movement of public cloud SaaS eventually gave birth to the notion of cloud-native.

So thoroughly has the notion of cloud-native now been galvanised, standardised and validated that we have the Cloud Native Computing Foundation (CNCF) to thank for KubeCon + CloudNativeCon, a technology conference with an impressive list of world locations and an enviable attendee roster. Given this week’s gathering of the cloud-native glitterati in Atlanta, Georgia, now feels like the perfect time to ask: what is the state of cloud-native in 2025?

According to the newly released State of Cloud Native 2025 report by CNCF and SlashData, 15.6 million developers are now adopting cloud-native tools, making cloud-native technologies a cornerstone of modern software delivery. However, only 30% of backend developers surveyed say they use Kubernetes, down from a peak of 36% in Q3 2023.

“Further, only 41% of professional AI developers are cloud-native, despite their infrastructure-heavy workloads. Since Kubernetes provides the backbone for many other services, this suggests that these developers may not consider themselves as users of Kubernetes, despite using technology frequently dependent on it. This is to be expected, as more and more well-abstracted internal platforms and platform engineering maturity hide Kubernetes complexity. However, organisations should ensure sufficient Kubernetes expertise exists somewhere in their teams, as troubleshooting and optimisation often require understanding the orchestration layer, even if most application developers are appropriately shielded from it,” said Bob Killen, senior technical program manager, CNCF.

Betty Junod, Heroku CMO at Salesforce, has many thoughts on the state of cloud-native. She suggests that after what is now a decade, Kubernetes has reached a point of maturity and adoption where it is no longer the focal point of the conversation, because it is the default.

AI is the disruptor

“The ecosystem around Kubernetes continues to grow in complexity with an emphasis on how to optimise it to extract more value, make it smaller for the edge, or address more of the app portfolio. AI is the biggest disruptor in recent years, with the opportunity to exponentially increase the volume of apps created by bringing in an entire new class of builders into the ecosystem and will challenge supportability, observability and operations,” said Junod.

Mike Milinkovich, executive director at the Eclipse Foundation, thinks that the cloud-native revolution has “fundamentally changed” how software is built, deployed, observed and managed. He says that it has enabled continuous integration and deployment; more efficient use of compute, storage and network resources; and hybrid and multi-cloud architectures.

“Fundamentally, Kubernetes and the vast ecosystem that has sprung up around it made cloud safe for the enterprise, spurring an enormous transition across the entire industry,” advises Milinkovich. “It is important to recognise that this all happened because of the power of open source and open collaboration. No proprietary technology could have possibly had the same impact. Every layer of the modern stack, from cloud-native Java and Kubernetes orchestration to eBPF networking and OpenTelemetry observability, has matured through open source licensing, collaboration and governance. This open source ecosystem is now enabling the next generation of innovation such as large-scale deployments of agentic AI systems.”

The always-affable Stu Miniman, senior director of market insights for hybrid platforms at Red Hat, is guaranteed to have plenty to say on the state of the cloud-native nation.

“We’ve reached a level of maturity in the cloud-native ecosystem that people might think that things are now a bit boring. While AI is a natural extension of Kubernetes and cloud-native architectures, there are changes required in the architecture to support AI workloads compared to previous workloads. Platform engineering continues to have strong customer interest… and new AI enhancements allow for even greater productivity for developers and operators. Additionally, changes in the virtualisation landscape open the door for those who want to embrace modernisation while being able to maintain VMs alongside containers, leveraging KubeVirt with supporting projects and a broad ecosystem,” said Miniman, in a briefing prior to this week’s event, with his trademark enthusiasm. 

Welcome to continuous everything

Over the last decade, cloud-native gave us near-limitless velocity: instant deploys, automated pipelines and continuous everything. This is the opinion of Nishant Modak, founder and CEO of Last9.

“But speed has exposed a new constraint: comprehension and actionability. We’re shipping more code than ever, but debugging still takes as long as it always has and ownership is still fuzzy,” said Modak. 

Modak suggests that the real challenge of today isn’t just how to deploy faster; it’s how to understand our systems faster and react before users feel the pain. Software developers and data scientists are therefore compelled to engineer an additional level of unified observability, through error budget-aware tools that offer clear visibility into service health, dependencies and performance, as they build cloud-native architectures that are performant, robust and effective.
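The error budget framing Modak alludes to reduces to simple arithmetic: an SLO target implies a fixed budget of allowable failures over a window, and a burn rate shows how quickly that budget is being spent. A minimal Python sketch of the idea (the figures and function names here are purely illustrative, not drawn from Last9’s or any other vendor’s tooling):

```python
def error_budget(slo_target: float, total_requests: int) -> float:
    """Number of failed requests an SLO permits over a window."""
    return (1.0 - slo_target) * total_requests

def burn_rate(failed_requests: int, budget: float) -> float:
    """How fast the budget is being consumed; above 1.0, the window will be blown."""
    return failed_requests / budget if budget else float("inf")

# Hypothetical example: a 99.9% availability SLO over 1,000,000 requests
budget = error_budget(0.999, 1_000_000)  # roughly 1,000 failed requests allowed
rate = burn_rate(2_500, budget)          # roughly 2.5: budget spent 2.5x too fast
```

A burn rate above 1.0 means the service will exhaust its budget before the window ends, which is typically the condition error budget-aware alerting pages on.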

“We’re seeing Kubernetes evolve from an orchestration platform into the foundation for production-grade AI and multi-tenant environments. Cloud-native today is about consistency; deploying and managing compute and data seamlessly across clouds, on-prem and at the edge. The rise of real-time inference, observability and serverless orchestration means developers need platforms that can automatically scale and react to data as it arrives. Open standards like Kubernetes, Kafka and S3 are the enablers, ensuring freedom from lock-in and allowing innovation to happen everywhere,” said Alon Horev, co-founder and chief technology officer at VAST Data.

Sida Shen, product manager at Celerdata, says that – when asked what the backbone of enterprise analytics is for cloud-native workloads – the answer is generally the data warehouse.

“The data warehouse has done a great job of centralising information, powering dashboards and enabling leaders to make better decisions. But when organisations needed to expose analytics directly to their customers, the limits of warehouse-centric approaches quickly surfaced. Latency became a sticking point, storage costs rose sharply, and hybrid or on-premise deployments were often out of reach,” said Shen. “That is, until the data lakehouse stepped in as a credible alternative. By combining the scalability of data lakes with the performance characteristics of analytical databases, cloud-native lakehouses have brought flexibility and efficiency to customer-facing workloads. Crucially, when powered by an open query engine such as StarRocks and paired with cloud-native technologies such as Apache Iceberg as the table format, the architecture becomes performant, open and interoperable. It is a model that organisations at petabyte scale are already proving in production.”

The operational truth is stark

Ari Zilka, CEO and founder at MyDecisive, thinks that the cloud-native battleground in 2025 is all about value extraction. He states that “the operational truth is stark”: Kubernetes reigns supreme, having crushed legacy PaaS platforms like Cloud Foundry and triggered a final, urgent migration wave. This consolidation of container orchestration is forcing enterprises to double down on smarter platform engineering.

“However, runaway complexity and cost threaten to derail mass enterprise success. The modern observability stack has become a black hole, delivering insufficient value for its exorbitant cost, and demands a fundamental rethink of data management. Simultaneously, the data lakehouse gamble failed, proving too complex and expensive. The imperative is clear: pull workloads back from the brink and onto central platforms with democratised data management,” said Zilka.

Kris Kang, head of product for AI and cloud at JetBrains, wants to talk about cloud-based agents. He proposes that, increasingly, we’ll see enterprises use teams of agents that work alongside human teams. Done correctly, this will unlock significant productivity gains. That said, there are lingering questions standing in the way of that ideal: How do these agents work with one another and what’s the consequence for the humans in the loop? How do the agents interoperate with human tooling? How do enterprises prevent security risks and bad agent behaviour?

“These remain unsolved problems that we will see more companies grappling with as fleets of remote agents are adopted,” advised Kang. “Most enterprises will realise they are at risk of relying on one LLM provider and, to lower overall AI costs, they will have to use open LLMs or build their own purpose-built models. The consequence of such a decision is that enterprises will now have to build their own datacentres and/or procure GPUs from cloud providers (where supply is already scarce). Availability of these GPUs, their longevity as chips improve and their fungibility across different types of workloads (besides serving LLMs), are key to achieving ROI on compute spend.”

He also says that in data storage, as LLM and agent use increases, the amount of data for auditing and personalisation will explode. This means we should expect to see exponential growth in the amount of data being generated and stored in the cloud, far exceeding that which has been collected, stored and used throughout the pre-AI era.

Embedded self-service, governance & sustainability

Benjamin Brial, founder at Cycloid.io says that the cloud-native shift is now less about new container tooling and more about how organisations are embedding self-service, governance and sustainability into platform engineering. 

“First, developer experience is key; providing abstractions that let teams move safely and with more velocity is a core driver. Second, hybrid/multi-cloud and GitOps-driven infrastructure are now mainstream. We’re seeing more businesses expect the Internal Developer Platform (IDP) to offer choice and modularity, not just one stack. Third, I think cost-efficiency (FinOps) and sustainability (GreenOps) are growing as core pillars. Cloud-native means scalable, yes, but also efficient and measurable. The winners are the ones who will combine control and clarity with speed,” said Brial.

Nigel Douglas, head of developer relations at Cloudsmith says that compromised dependencies are “intentional weapons” that operate invisibly deep in dependency trees, which are then executed automatically by build systems. 

“We’re seeing that more with high-profile attacks such as with recent npm incidents like ‘Shai-Hulud‘ and ‘PhantomRaven‘. We expect focus to shift beyond CVE scanning, to active malware detection when it comes to safeguarding the cloud-native software supply chain with the OpenSSF Malicious Packages project already providing essential real-time intelligence,” he said. 

Douglas also suggests that, as Kubernetes crosses the threshold into AI-native orchestration, the platform is maturing fast, with Dynamic Resource Allocation (DRA) now GA in v1.34. DRA enables integration with AI hardware like GPUs, TPUs and NICs, setting the stage for AI-powered workloads to run efficiently and securely within Kubernetes production environments.

Hang on, what comes next?

Yasmin Rajabi, COO at cloud management & optimisation platform company CloudBolt, says that cloud-native technologies have hit the point of maturity where the question is no longer just about moving to containers; “what comes next” is the most pressing concern: how do we scale, how do we deal with Day Two problems… and how do we improve operational efficiency?

“The focus has shifted from how quickly I can deploy, to how I can get a handle on costs and how resilient my platform is to changes or outages like we saw recently with AWS. Teams are recognising the overhead these technologies have introduced for developers and are centralising that work. We’re seeing more platform teams set best practices, use tooling to enforce them and move from ‘adoption mode’ to ‘operational excellence’,” said Rajabi.

James Urquhart, field CTO and technology evangelist at AI orchestration company Kamiwaza AI says that enterprises are still run on software that was developed over the course of years, or even decades, in architectures that derive from client-server patterns in the 1990s. He says AI does little to change that, though there are signs that new deployment architectures may be coming.

“This is important because it emphasises the need to have a variety of deployment and operations options for cloud-native development. Everything from virtual machines to containers to functions is necessary for packaging and running software. Data stores ranging from simple shared file systems to relational databases to more esoteric options that are slowly becoming mainstream (like vector and graph databases) are necessary to meet current needs and innovate new ones,” explained Urquhart.

Exposing the fragility of cloud

Kevin Cochrane, CMO at alternative hyperscaler Vultr says that recent events have “exposed the fragility” of today’s cloud infrastructure and underscored the need for resilience. He states that enterprises are realising that “multicloud isn’t enough” if it simply means multi-vendor contracts. 

“Rather, they need true infrastructure resiliency, built on distributed architectures that can sustain operations when a single provider or region goes down. The most prepared, forward-thinking organisations are reconfiguring their infrastructure for autonomy and redundancy, building systems that can self-heal and rebalance workloads anywhere. This marks a shift from cloud-native as a deployment model to cloud-native as an operational mindset, one rooted in resilience, sovereignty and control,” said Cochrane.

Nick Heudecker, head of market strategy and corporate development at Cribl thinks that there are “two clear trends” that are emerging as cloud-native grabs more enterprise IT budgets. For Heudecker, the first is coping with the overwhelming amount of telemetry data coming out of cloud-native architectures and the difficulty in deriving business insights from it. He says that the second is that platform engineering is “essential to accelerate developer productivity” and enforce guardrails as developers race to be cloud-native.

Don’t go chasing framework waterfalls

Mike Kelly, CEO of Bindplane insists that as cloud-native grows, AI will matter, but only if it’s built on solid plumbing. He proposes, therefore, that the real innovation right now is in the cloud operational layer i.e. connecting telemetry, automation and governance so AI systems actually have trustworthy data to act on. Kelly says the companies winning in cloud-native aren’t “chasing new frameworks” today; they’re standardising how data flows between clouds, tools and teams. We’ve entered the era of intelligent infrastructure, where the smartest thing in a stack might be the pipeline.

“The biggest cloud-native trend of 2025 is the realisation that data has become the new lock-in. Cloud portability doesn’t matter if your observability and security pipelines are tied to a single vendor’s data model. We’re seeing rapid movement toward open standards like OpenTelemetry and shared control planes that decouple data collection from analysis. The future of cloud-native will be defined by how organisations move, reduce and route their telemetry, not just how they run containers,” said Kelly.

He states that cloud-native today isn’t about how many microservices you can spin up, it’s about how well you can control them. “Organisations are waking up to the cost, sprawl and observability challenges created by years of enthusiastic adoption. The next phase is about consolidation: unified data pipelines, portable telemetry and smarter automation that make multi-cloud environments manageable again. The most forward-thinking teams are re-architecting for simplicity, not just scale,” he added.

Resilience is still a work in progress

Ajay Khanna, CMO at Yugabyte, says that enterprises rely on the flexibility and scalability of distributed cloud-native architectures to meet growing application demands. 

“If we learned something from the recent AWS and Azure outages, resilience is still a work in progress,” said Khanna. “Resilience must be built into the architectural design and cannot be added as an afterthought. Multi-region, multi-cloud and inter-cloud approaches with elastic scalability and ultra-resilience are the way forward.  As organisations expand across hybrid and multi-cloud environments and integrate AI into their operations, maintaining data consistency, performance and resilience has become increasingly complex. In 2025, the cloud-native landscape reflects this evolution, emphasising AI-native, data-driven systems that operate seamlessly across diverse cloud infrastructures to ensure business continuity, optimise costs and comply with data sovereignty requirements.”

Cloud-native in 2026?

Looking forward, then, there appears to be wide consensus on several areas: Kubernetes standardisation, the need to embrace platform engineering, spiralling complexity, the sporadic fragility and brittleness that still pervade, and (obviously, no spoiler alert needed) the spillover effect that AI penetration inside modern IT stacks is having on almost every other aspect of software application development and data science. Will all these trends still be with us in 2026? Yes, some, mostly, maybe… plus a few others is the most prudent guess.

There may be only one certainty: cloud-native will be the natural native norm for SaaS deployments in every sphere. Let’s spin another one up for luck.