Cloud computing has now entered its mature adolescence: still surprisingly developmental, changeable and occasionally irrational in places, but certainly old enough to know better and overdue some proper behaviour. With the debate between public and private cloud long over and hybrid now (mostly) a de facto standard for typical deployments, multi-cloud remains an oft-misunderstood state of being, with FinOps constantly berating us for waste and inefficiency.
Public cloud today is a place where the hyperscalers understand that operational efficiency is a two-way street: customers must shoulder some of the burden for provisioning excellence, but cloud service providers must make that burden a licensable, manageable reality. We will still likely see disagreement over what constitutes an industry-specific cloud instance, what cloud-native means in the AI-native code arena and, perhaps most of all, what shape sovereign cloud initiatives should take next year.
A suitably opportune time to have this discussion is now, with AWS re:Invent staged in Las Vegas this month. What do cloud practitioners think about the state of cloud hyperscalers in 2026?
Let’s start with a selection of some of the cloud service updates and new products being tabled by AWS itself. The company used its annual practitioner event this year to detail Frontier Agents, a new class of AI agents that are said to represent a step-change in what agents can do. They’re autonomous, scalable and they span three core types: Kiro autonomous agent, AWS Security Agent and AWS DevOps Agent.
Kiro autonomous agent acts as a virtual software application developer, AWS Security Agent acts as a security consultant and AWS DevOps Agent acts as an on-call operations (i.e. Ops) team.
Amazon Bedrock AgentCore
AWS also announced new developments in Amazon Bedrock AgentCore, the company’s platform for building and deploying agents. Because organisations need robust controls to prevent unauthorised actions by agents, Amazon Bedrock AgentCore is launching its “Policy” service in preview, allowing teams to set clear boundaries for agent actions using natural language. Alongside it, AgentCore Evaluations simplifies quality monitoring with 13 pre-built evaluators for dimensions like correctness and safety, continuously sampling live interactions to trigger alerts when performance drops.
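To make that pattern concrete, here is a minimal Python sketch of the sampling-and-alerting loop such a service implies. This is not the AgentCore API: the two toy evaluators, the get_live_interactions feed and the 0.8 alert threshold are hypothetical stand-ins for the 13 managed evaluators AWS describes.

```python
import random  # stands in for a real sampling source

# Hypothetical stand-in for a feed of live agent interactions.
def get_live_interactions(sample_rate=0.5):
    interactions = [
        {"id": 1, "question": "Reset my account", "answer": "Done via SSO."},
        {"id": 2, "question": "Refund order 42", "answer": "Refund issued."},
    ]
    return [i for i in interactions if random.random() < sample_rate]

# Toy evaluators, each scoring one quality dimension from 0.0 to 1.0.
def correctness(interaction):
    return 1.0 if interaction["answer"] else 0.0

def safety(interaction):
    banned = ("password", "ssn")
    return 0.0 if any(w in interaction["answer"].lower() for w in banned) else 1.0

EVALUATORS = {"correctness": correctness, "safety": safety}
ALERT_THRESHOLD = 0.8  # assumed alerting policy, not an AWS default

def evaluate_sample():
    """Continuously sample live traffic and alert when any score drops."""
    for interaction in get_live_interactions():
        for name, evaluate in EVALUATORS.items():
            score = evaluate(interaction)
            if score < ALERT_THRESHOLD:
                print(f"ALERT: {name}={score:.2f} on interaction {interaction['id']}")

if __name__ == "__main__":
    evaluate_sample()
```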
Additionally, AgentCore Memory introduces “episodic functionality” (a story in its own right, but essentially meaning agents can learn from past experiences) and AWS also announced Amazon Bedrock’s largest expansion of models to date; the company is adding 18 new fully managed open-weight models and reinforcing its commitment to offering a selection of managed models from AI providers. This includes four Mistral AI models, available first on Amazon Bedrock. With this launch, Amazon Bedrock now provides nearly 100 serverless models, offering a broad and deep range of models from leading AI companies.
AWS also announced EC2 Trn3 UltraServers, powered by the advanced Trainium3 chip and delivering 4.4x more compute performance, 4x greater energy efficiency and nearly 4x more memory bandwidth than their predecessor. Also now generally available is Amazon S3 Vectors, enabling AI systems to store and query vectors natively in S3 with scale and efficiency. Continuing down the newsreel, AWS AI Factories provides enterprises and government organisations with dedicated AWS AI infrastructure deployed in their own datacentres, combining Nvidia GPUs, Trainium chips, AWS networking and AI services like Amazon Bedrock and Amazon SageMaker AI.
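For readers new to the idea, “storing and querying vectors” simply means nearest-neighbour search over embeddings. The toy below shows the store/query semantics in plain Python with numpy; it is a conceptual sketch only, not the S3 Vectors API, and the put_vector/query_vectors names are invented for illustration.

```python
import numpy as np

# Toy in-memory vector index; a native vector store offers the same
# store/query semantics at vastly larger scale.
index = {}  # key -> embedding

def put_vector(key, embedding):
    index[key] = np.asarray(embedding, dtype=np.float32)

def query_vectors(query, top_k=2):
    """Return the top_k stored keys ranked by cosine similarity."""
    q = np.asarray(query, dtype=np.float32)
    scores = {
        key: float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        for key, v in index.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

put_vector("doc-a", [0.9, 0.1, 0.0])
put_vector("doc-b", [0.0, 1.0, 0.0])
print(query_vectors([1.0, 0.0, 0.0]))  # doc-a ranks first
```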
Teachings from Swami
Dr Swami Sivasubramanian, VP of agentic AI at AWS, used his day-two keynote address to run through news announcements, including simplified model customisation functions. New Amazon Bedrock and Amazon SageMaker AI capabilities cut advanced model customisation workflows from months to days, accelerating AI development and bringing new solutions to market faster.
Then we come to Kiro powers, a new way for AI agents to access specialised expertise without overwhelming context windows. There is also continuing momentum behind Amazon Bedrock AgentCore, which addresses developers’ challenges with a fully managed platform that supports any framework while handling the unique demands of agentic AI.
Although AWS is (almost without argument) the most open, accessible, vocal and communicative of all the cloud service provider (CSP) hyperscalers, we do of course listen to Microsoft Azure, Google Cloud Platform and perhaps even Oracle. Going wider still, the rest of the enterprise software platform industry has plenty to say on cloud, so how can we assess the state of the cloud nation in 2026?
Shining a light on ‘shadow cloud’
Catalin Voicu, cloud solutions engineer at N2W, says that cloud outages are driving the industry towards a shadow cloud future. She thinks that as hyperscaler outages continue to make headlines, organisations are losing patience with “all-in-one” cloud dependency… and that 2026 will accelerate a shift toward smaller, regional clouds and multi-cloud strategies.
“Organisations will also start relying on secondary, lesser-known cloud providers as hidden backups, creating a shadow cloud layer. Hyperscalers will feel the pressure, scrambling to reassure customers with aggressive marketing campaigns, new reliability guarantees and promises of ironclad SLAs,” said Voicu.
She suggests that the rise of invisible clouds and predictive failover will also be key: as organisations lose trust in even the largest hyperscalers, the next wave will include clouds that don’t rely on human intervention. “Adopting invisible cloud architectures where workloads automatically fail over across different providers, regions, or even private clouds will emerge. AI tools will help predict outages before they happen and reroute traffic and data in real time. As a result, businesses will stop caring about which cloud they’re using at any moment,” said Voicu.
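The mechanics Voicu describes reduce to a routing decision driven by predicted health. The sketch below is a deliberately simplified illustration, assuming a hypothetical health_score predictor fed by telemetry; real predictive failover would also have to handle data replication and consistency.

```python
import random

# Hypothetical provider endpoints in preference order.
PROVIDERS = ["primary-hyperscaler", "regional-cloud", "private-cloud"]

def health_score(provider: str) -> float:
    """Stand-in for a real predictor fed by latency and error telemetry."""
    return random.uniform(0.0, 1.0)

def route_workload() -> tuple[str, float]:
    """Pick the first provider predicted healthy; fail over down the list."""
    scores = [(p, health_score(p)) for p in PROVIDERS]
    for provider, score in scores:
        if score > 0.5:  # assumed 'predicted healthy' threshold
            return provider, score
    return max(scores, key=lambda ps: ps[1])  # least-bad fallback

for _ in range(3):
    provider, score = route_workload()
    print(f"routing to {provider} (predicted health {score:.2f})")
```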
The grande dame of openness
Amanda Brock, CEO at OpenUK, says that what’s changed today around cloud is the impact of geopolitics and concerns about sovereignty. Nation states now want assurance not only that the infrastructure the platform providers use is open source, but also that providers are not creating lock-in around that infrastructure, whether at the service layer or in the surrounding tech.
“This need to demonstrate that there is no potentially crippling dependency is going to see a significant shift in cloud products and the marketplace,” said Brock. “As an example, we’re seeing the UK proactively build its first hyperscaler in NScale. We can be clear that it will be built on open source infrastructure, same as the US hyperscalers.”
Wim Stoop, senior director at Cloudera, says that geopolitical instability, regulatory shifts and a renewed focus on local innovation are accelerating demand for sovereign cloud services… and this trend is set to grow next year.
“Organisations cannot sit idle. On one hand, the hyperscalers are investing, especially in sovereign solutions… and on the other, the EU AI Act makes it easier and lower cost for customers to move elsewhere. However, this is only where their data is concerned,” said Stoop.
“It’s an issue for organisations as they can switch their storage hyperscaler faster and cheaper, but the workloads are a different story. Without the right technology solution, it’s a costly exercise due to the refactoring and architecting. It’s not enough to have data portability. Workloads must be equally portable. Freedom to choose what to deploy where without compromising control and in full compliance with ever-changing regulations is the key to success,” he added.
Stoop says that today, in 2026, and beyond… hyperscalers and businesses must work together to bring AI to the data, anywhere it lives. Only then will organisations be able to capitalise on the innovation and insight enterprise AI has the potential to provide.
The convergence of AI & cloud
Charlie Doubek, global VP of cloud & security services at NTT DATA, says that far from being just another passing technology trend, the convergence of AI and cloud is the disruption that redefines the digital era.
“Cloud gave organisations speed, scale, and resilience; AI adds intelligence, automation and adaptability. Together, they form a platform that streamlines operations, fuels innovation, accelerates decision-making and opens doors to opportunities we’ve only begun to imagine. This is the start of a new chapter for CIOs, where the cloud becomes less of a utility and more of a growth engine – with humans firmly in the driver’s seat,” said Doubek.
He says that when AI agents work alongside the cloud, enterprises realise valuable business outcomes in daily operations. For instance, AI speeds up and simplifies cloud migration. Generative AI can extract data from existing environments and automatically reconstruct it within an organisation’s hyperscaler of choice.
“Additionally, standardised agentic AI protocols support secure, cross-platform communication between agents. This makes it possible to design multi-agent workflows that span different clouds… so an agent in Azure can collaborate seamlessly with one that’s accessing a dataset in Google Cloud Platform. In a supply chain scenario, this could enable real-time coordination between procurement, logistics and inventory agents across different cloud providers,” advised Doubek.
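A rough Python sketch of the plumbing behind such a cross-cloud exchange follows. The message envelope, agent names and send/receive helpers are hypothetical; a production system would add the authentication, signing and transport that the standardised agent protocols Doubek mentions are meant to provide.

```python
import json
from dataclasses import dataclass, asdict

# Minimal message envelope two agents on different clouds could exchange.
@dataclass
class AgentMessage:
    sender: str     # e.g. "procurement-agent@azure"
    recipient: str  # e.g. "inventory-agent@gcp"
    intent: str     # what the sender wants done
    payload: dict   # task-specific data

def send(message: AgentMessage) -> str:
    """Serialise for the wire; stands in for a real network hop."""
    return json.dumps(asdict(message))

def receive(wire: str) -> AgentMessage:
    """Deserialise on the receiving cloud."""
    return AgentMessage(**json.loads(wire))

wire = send(AgentMessage(
    sender="procurement-agent@azure",
    recipient="inventory-agent@gcp",
    intent="reserve-stock",
    payload={"sku": "A-100", "qty": 250},
))
print(receive(wire).intent)  # reserve-stock
```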
Kyle Campos, chief technology & product officer at CloudBolt Software, says that cloud practitioners see hyperscalers increasingly treating operational efficiency as a shared responsibility.
“Automated, continuous optimisation will be table-stakes for managing hybrid and multi-cloud deployments, helping teams enforce governance, reduce waste, and keep pace with rapid infrastructure changes. At the same time, cloud-native in 2026 isn’t just containers or microservices – it’s about orchestrating the entire stack holistically, with ‘build,’ ‘manage,’ and ‘optimize’ workstreams collaborating across teams and clouds to meet both business and hyperscaler expectations,” said Campos.
Hybrid AI is now the default
The ‘cloud-everything’ era is coming to an end. Data gravity, sovereignty laws and inference cost control are drivers for on-premises and model-to-data architectures. This is the opinion of Justin Borgman, CEO, Starburst.
“Enterprises are realising that critical AI workloads need to remain close to their data, whether on-premises or in hybrid environments, to meet stringent requirements for performance, compliance and data sovereignty. As a result, DevOps and data teams will increasingly build intelligent, governed ‘AI factories’ inside the enterprise, integrating AI pipelines directly with existing systems rather than relying solely on public cloud services. This approach ensures organisations can scale AI responsibly while maintaining control over sensitive information and operational efficiency,” said Borgman.
In 2026 the demand for cloud and compute power is only going to increase, triggered mainly by AI, but also to support new SaaS platforms that cover every conceivable business function or use case. So says Pejman Tabassomi, field CTO for EMEA, Datadog.
“It never stops. But that constant accumulation of data and applications is driving up costs. With organisations increasingly adopting multi-cloud strategies, the need to identify and implement ways to optimise cloud costs is more urgent than ever. Our own research found that more than 80% of container spend is wasted on idle resources and the percentage of organisations participating in commitment-based discounts is decreasing,” said Tabassomi.
He says that with infrastructure spanning AWS, Azure and Google Cloud, organisations need to be able to understand which actions will result in the most cost savings.
“Cloud cost optimisation tools and strategies help firms understand how and where they’re spending in a multi-cloud environment, so they can ensure that costs align with business goals. They also allow teams to highlight idle or underutilised resources so they can optimise for cost efficiency while maintaining performance,” explained Tabassomi.
For example, he says, if a team is using Kubernetes in a multi-cloud environment, it could reduce costs by resizing the resources assigned to containers. To Tabassomi’s mind, cloud cost optimisation will be front of mind for everyone over the next year, which is why it’s important that organisations have the ability to manage cloud spend across their hyperscaler environments.
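As a minimal illustration of that resizing logic, the Python sketch below compares requested CPU with observed p95 usage and flags over-provisioned containers. The utilisation figures, 20% headroom and action threshold are assumptions for the example, not Datadog’s methodology.

```python
# Toy right-sizing check: compare requested vs observed container CPU.
containers = [
    {"name": "api",    "cpu_request": 2.0, "cpu_p95": 0.4},
    {"name": "worker", "cpu_request": 4.0, "cpu_p95": 3.6},
]

HEADROOM = 1.2       # keep 20% above observed p95 usage
MIN_SAVING = 0.25    # assumed threshold (vCPU) worth acting on

for c in containers:
    suggested = round(c["cpu_p95"] * HEADROOM, 2)
    waste = c["cpu_request"] - suggested
    if waste > MIN_SAVING:
        print(f"{c['name']}: request {c['cpu_request']} vCPU, "
              f"p95 use {c['cpu_p95']} -> resize to ~{suggested} vCPU")
```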
Beyond general-purpose compute…
Taho CTO Michal Ashby is upbeat but realistic about cloud in 2026. He says that cloud computing may look mature now, but AI and ML workloads are exposing how unprepared the current model really is. The major platforms were designed for general-purpose compute, not the high-variance, latency-sensitive, resource-intensive patterns that modern AI demands.
“We’re effectively pushing training pipelines, inference fleets, and massive data flows through infrastructure that wasn’t built for it,” said Ashby. “If today’s cloud is a capable teenage driver, AI is the equivalent of handing that teenager the keys to an 18-wheeler fueled by a rocket engine. Yes, they can drive, but not with the control, instrumentation, or safety these workloads require.”
“The core issue is simple: cloud platforms optimised for broad availability and generic workloads haven’t evolved to meet the specialised requirements of today’s AI systems. Demand is already outpacing the underlying technology, and we’re nowhere near the peak. Even if the allure of AI assistants fades, the deeper industrial, scientific, and enterprise applications will continue accelerating. Current cloud offerings need to mature and evolve quickly or they will and should be eclipsed by the next generation of platforms.”
Marinela Profi, global AI strategy lead at SAS, says that sovereign and hybrid AI architectures will rise. “Global enterprises will demand control over their data, models, and infrastructure. ‘Bring your own model’ and ‘sovereign AI’ setups – where companies run foundation models within their own governance and compliance boundaries – will become the default for regulated industries. In other words: the cloud stays, but the control shifts,” said Profi.
The state of cloud
With all of the above said, let’s turn to Mitch Ashley, VP and practice lead at Futurum Research, for some much-needed foundational insight on the road ahead.
“Advances during 2025 reset expectations for what the next generation of applications will look like in 2026. Hyperscalers are now introducing agent development platforms, boundary controls, memory systems, evaluation pipelines and control planes that redefine how software operates inside the cloud. The shift is clear: the cloud is an execution environment for agentic processes, not just a place to run workloads,” said Ashley, with a knowing wry glance, during an in-person briefing session at AWS re:Invent in Las Vegas itself.
There’s much to take in here and we can see major themes surfacing, spanning everything from cloud complexity (yes, still!) to containerisation, sovereign cloud governance and the all-pervading impact of AI across every aspect of cloud.
Will things be simpler and clearer in 2026? Only a fool would say yes, surely.
