AWS is still the biggest hyperscaler, even though its lead over Microsoft and Google isn’t as big as it used to be. With the advent of AI, the battle for cloud dominance has gone into overdrive, fueled by cloud spend that keeps rising sharply. What does this mean for AWS? Can the company keep up with the other two of the ‘Big Three’, given that it has plotted a different course? We sit down with Martin Elwin, Technology Director for Northern Europe at AWS, to discuss this.
Elwin, who speaks to many customers, has seen the impact of AI firsthand. It has become the dominant driver of capacity needs, and it is forcing organizations to fundamentally rethink their approach to AI adoption. “Every customer I talk to is interested in improving how they get value out of AI,” he says.
AI Factories address infrastructure challenges
AWS recently announced AI Factories to help customers who want to control their hardware while still benefiting from managed services. AI Factories is, in other words, an on-prem offering specifically aimed at running AI. Elwin also points out that many companies have discovered that simply buying GPUs and placing them in data centers doesn’t deliver AI value. “Getting the value out of it, actually getting the services on top of it is not so trivial,” he explains. That is something AI Factories also addresses. “You need that managed service layer on top of it so that you can make it easily accessible to your own organization and actually run workloads on it.”
The AI Factories offering targets regulated industries and organizations that need infrastructure in their own environments but lack the expertise to efficiently utilize it. AWS provides the managed services layer to ensure high utilization and easy accessibility.
Business value must drive AI adoption
Elwin sees organizations repeating mistakes from early cloud migrations by treating AI as purely a technology problem. “You shouldn’t make it a technology problem and just say to your engineering department, move our service into the cloud or just implement some AI solution,” he says. “You need to start with: what is it that actually brings us business value?”
In other words, AI adoption is a board-level question, not just an IT department responsibility, according to Elwin. Each business leader needs to determine how AI can help them do more with existing resources. AWS wants to help customers address this with frameworks developed within Amazon. By using them, they can answer critical questions about the value being created.
Drawing on Amazon’s experience across logistics, devices, and retail, AWS brings specific learnings to customers. For supply chain optimization, Elwin notes: “We’ve done extensive work in how to think about applying AI models to optimize supply chain. These are things that we bring to customers and give advice around.”
European Sovereign Cloud
The European Sovereign Cloud (launched last January) represents AWS’s response to sovereignty concerns while maintaining full cloud capabilities. Unlike some competitors’ offerings, AWS built it as a complete region using the same architecture as commercial regions but with critical differences.
“We had to do all of this proper due diligence across all of the services to ensure that they are isolated into the European sovereign cloud,” Elwin explains. This includes separate certificate authority infrastructure, DNS infrastructure, identity and access management (IAM), and key management services (KMS), all disconnected from other AWS regions.
However, Elwin emphasizes that most customers will continue using standard commercial regions built with a “sovereign by design” approach from the beginning, featuring the Nitro system to ensure even AWS cannot access customer data. The European Sovereign Cloud provides an additional option for specific regulatory requirements or risk mitigation scenarios.
Whether the AWS European Sovereign Cloud is enough to convince people that their data is safe from prying eyes remains to be seen. There is a substantial undercurrent of distrust in Europe toward technology from outside the region. Some of the arguments used in that debate are very valid, others less so, in our opinion. However, perception very often equals reality, so the veracity of all the claims being made isn’t always easy to establish.
Production AI requires new capabilities
For organizations moving AI solutions into production, AWS offers Amazon Bedrock AgentCore capabilities, including evaluations, to address critical operational challenges. “One challenge that I have seen with organizations who want to take AI solutions actually into production is that it’s not always trivial doing that,” Elwin says. “You need to think about a number of things, like security and scalability and how you monitor it.”
Customers specifically asked how to detect drift in production AI solutions, i.e. when model performance degrades over time. AgentCore evals help track quality both before production deployment and continuously in production. Elwin also highlights scenarios where AI models get adjusted by vendors without clear explanation or become deprecated. This creates potential disruptions without proper test suites.
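The idea behind such evaluations can be illustrated with a simple rolling comparison of quality scores. The sketch below is purely illustrative and not AgentCore’s actual mechanism; the function name, score values, and threshold are all assumptions:

```python
# Hypothetical drift check: compare recent evaluation scores against a
# pre-deployment baseline. Names and thresholds are illustrative only,
# not part of any AgentCore API.
from statistics import mean

def detect_drift(baseline_scores, recent_scores, max_drop=0.05):
    """Flag drift when the recent average quality score falls more than
    max_drop below the baseline average (scores assumed in [0, 1])."""
    return (mean(baseline_scores) - mean(recent_scores)) > max_drop

# Scores from the same fixed evaluation suite, run before deployment
# and again continuously in production.
pre_deploy = [0.91, 0.89, 0.92, 0.90]
in_production = [0.83, 0.81, 0.84, 0.82]

print(detect_drift(pre_deploy, in_production))  # True: quality has dropped
```

Running the same evaluation suite on a schedule, rather than only before release, is what surfaces silent model adjustments or deprecations before they disrupt production.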
Amazon Bedrock AgentCore also includes security agents and DevOps agents, working alongside Security Hub which collects security findings from underlying services. This addresses the challenge of managing multiple security agents and tools in complex environments.
Regulated industries are ideal AI candidates
Contrary to common assumptions, Elwin argues that heavily regulated industries like banking are actually better positioned for AI adoption than more “free” industries. “They usually have a very good understanding of the data they have and how they’re processing it,” he says.
Rather than blanket restrictions on AI use, Elwin recommends regulated organizations take a granular approach. Detailed regulations like NIS2 and DORA are actually helpful because they provide clarity about what’s acceptable versus creating uncertainty with no regulations.
Continuous testing becomes critical
The shift to continuous AI model updates, potentially multiple times per day, requires organizations to move beyond traditional software development practices. While continuous testing represents best practice in modern software development regardless of AI, the stakes are higher with AI models.
Kiro, AWS’s agentic development environment, provides agentic coding capabilities alongside security agents and DevOps agents to help manage this complexity. The autonomous agent functionality lets developers use it like a colleague, asking it to perform tasks while retaining confidence in security and infrastructure management.
All in all, AWS believes it has some compelling arguments for organizations to invest in its infrastructure for AI. On both the hardware and the software side, it has expanded what it can offer. Whether that is enough to make AWS’s strategy a success remains up for debate. Google Cloud in particular has been growing rapidly of late, with a very broad AI offering across its entire portfolio.
The cloud market as a whole will continue to grow quickly too, with a record spend of over 500 billion dollars projected for 2026. AWS will get a substantial chunk of that investment, that much is certain. The company has stayed more firmly in the infrastructure part of the AI stack than its rivals have. That is, it focuses on the physical underlying infrastructure and the application infrastructure that developers use. It may have to look further up the stack if it wants to keep the healthy lead it has now.