AI requires mature choices from companies

The rapid rise of AI is putting pressure on organizations to review their infrastructure and working methods. Whereas the focus used to be on facilitating traditional IT workloads, AI requires mature choices in terms of scalability, data management, and governance. This new reality is forcing companies to adapt their technology and take their strategy, processes, and responsibilities to a higher level. A roundtable discussion with experts from AWS, NetApp, Nutanix, Pure Storage, Red Hat, and SUSE sheds more light on this topic.

Organizations must make choices that suit their unique situation, where compliance, security, and flexibility go hand in hand. It is a process that combines pragmatism and vision. AI requires innovation and maturity in decision-making. However, this new phase is not only technically challenging but also organizationally complex. AI puts companies in a position where they must take responsibility for their data, models, and AI agents, just as they do for their human employees.

The demand for scalable AI infrastructure

During the roundtable discussion, Marco Bal of Pure Storage emphasized that organizations are increasingly looking for a service-like experience in AI projects, including for on-premises infrastructure. “Customers want a model where on-premises infrastructure offers the same flexibility as the cloud. So scalable, quick to deploy, and without long delivery times,” Bal explains. This approach helps companies better cope with the unpredictability and rapid developments that AI brings.

According to Bal, factors such as performance, availability, and energy consumption are crucial. Organizations are struggling with how to set up their infrastructure as efficiently as possible while maintaining the flexibility needed to respond to changing demands in the coming months and years. Mitigating risk is, therefore, becoming an important part of a mature AI strategy.

Experimentation requires flexible choices

From left to right: Ricardo van Velzen (Nutanix) and Felipe Chies (AWS)

Ricardo van Velzen of Nutanix emphasizes that many organizations are still in an experimental phase with AI. They want to explore and test first before making any major investments, he explains. He recently spoke with a Dutch university that started with a pilot in the cloud but wants to return to on-premises infrastructure due to the sensitivity of the data. This illustrates how dynamic and unpredictable the journey to mature AI implementations is.

The university started in the cloud, where they could easily experiment, but quickly discovered that not all data could be stored securely in the public cloud. “They are now struggling with the question of which data can remain in the cloud and which must be kept on-premises,” says Van Velzen. This requires infrastructure that is scalable and can switch seamlessly between cloud and on-premises without hindering innovation.

According to Van Velzen, this example shows that organizations can no longer choose between cloud or on-premises as extremes when it comes to AI, but must embrace a hybrid approach. The key here is flexibility: ensuring that you can switch, experiment, and scale up quickly. At the same time, you must comply with the strictest security and compliance requirements.

Elasticity for an agile AI infrastructure

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies.

The reverse is also important: being able to scale back. Organizations want to be able to estimate future needs, for example, for new functionality. But often no one knows exactly whether such a feature will catch on. You need to be able to experiment, scale up, but also scale back without incurring unnecessarily high capital costs. That flexibility, being able to grow, shrink, and switch quickly, is essential for the pace and success of AI projects. Especially with AI agents, which are being used more and more, the ability to be flexible with infrastructure is more important than ever.
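The scaling behavior Chies describes can be made concrete with a small sketch. The policy below sizes a pool of inference workers to the current request backlog, both growing and shrinking within fixed bounds; all names, thresholds, and the queue-depth signal are assumptions for illustration, not any specific AWS mechanism.

```python
# Toy elastic-scaling policy: size GPU inference workers to a request
# backlog. Names and thresholds are illustrative assumptions, not tied
# to any specific product or cloud API.

def desired_workers(queue_depth: int, per_worker_capacity: int,
                    min_workers: int = 1, max_workers: int = 8) -> int:
    """Return how many workers to run for the current backlog,
    clamped so the pool neither starves nor over-provisions."""
    if per_worker_capacity <= 0:
        raise ValueError("per_worker_capacity must be positive")
    needed = -(-queue_depth // per_worker_capacity)  # ceiling division
    return max(min_workers, min(max_workers, needed))
```

With a capacity of 10 requests per worker, an empty queue scales back to the floor of 1 worker, a backlog of 35 asks for 4 workers, and a spike of 500 is capped at the ceiling of 8, which is the grow-and-shrink behavior described above.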

Anticipating change

Ruud Zwakenberg (Red Hat)

Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments.

According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises; embrace them, Zwakenberg explains. If you look at the long term, you see patterns recurring that have been around for decades. That’s why it’s important to build on an architecture that will work just as well tomorrow as it does today. That flexibility translates into the ability to switch between cloud, on-premises, and hybrid models, so that companies can always make the best choice.

With such a future-proof approach, Zwakenberg argues, organizations are better equipped to anticipate new challenges and opportunities. “If you know that you are in AWS today, can be on-premises tomorrow, and working hybrid the day after tomorrow, you keep all your options open. That is the power of a well-designed AI infrastructure. It not only provides room for growth and innovation, but also for resilience in an unpredictable future.”

Data movements back and forth

From left to right: Eric Lajoie (SUSE) and Pascal de Wild (NetApp)

Pascal de Wild of NetApp emphasizes the importance of flexibility in data movement within AI infrastructures. “The ability to move your data to where you need it, whether to a cloud partner or on-premises, gives organizations the freedom to respond quickly to new developments such as agentic AI,” he explains. This ability to switch between different infrastructures ensures that companies are not locked into a single solution, but can choose what best suits their specific needs and compliance requirements.

De Wild emphasizes that this agility is crucial for scaling AI initiatives and adapting them to the ever-changing technological environment. He also points out that it is difficult to predict exactly what will happen next year. If your infrastructure allows you to respond dynamically, you will be in a stronger position to seize new opportunities. The right strategy for data management and infrastructure choices, therefore, forms a foundation for sustainable growth in the world of AI.
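The placement decision De Wild describes, moving data to where it needs to be, often reduces to a policy that maps data sensitivity to a storage target. The sketch below is a minimal illustration of such a rule; the labels, tiers, and fail-closed default are assumptions for this example, not a NetApp feature.

```python
# Illustrative data-placement rule: route datasets to cloud or
# on-premises storage based on a sensitivity label. Labels and tiers
# are assumptions for this sketch, not any vendor's API.

PLACEMENT = {
    "public": "cloud",
    "internal": "cloud",
    "confidential": "on-prem",
    "regulated": "on-prem",
}

def place(dataset: dict) -> str:
    """Return the storage target for a dataset record such as
    {"name": "exam-results", "sensitivity": "regulated"}."""
    label = dataset.get("sensitivity", "regulated")  # unlabeled: fail closed
    return PLACEMENT.get(label, "on-prem")           # unknown label: on-prem
```

Encoding the rule as data rather than scattered if-statements makes it auditable, which matters when compliance teams need to verify where each class of data may live.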

The strategic risk: sovereignty

Towards the end of the roundtable discussion, Eric Lajoie from SUSE raises another point for discussion. He points out the importance of clarity around data sovereignty in AI implementations. “When customers talk about sovereignty, I always first check what that means to them,” he explains. “For some organizations, it means they want absolute control without a ‘kill switch’: any possibility for an external party to remotely block or disable the service. If that’s the case, they often want a solution that runs entirely on-premises and remains outside the sphere of influence of third parties.”

Marco Bal (Pure Storage)

Lajoie continues that the concept of sovereignty is not always straightforward and can vary from one organization to another. “Some are satisfied if data remains within their own country, while others demand complete control, without any external access or influence. This also has major implications for the choice between SaaS solutions and on-premises infrastructure.”

Another complex issue, according to Lajoie, is combining data sovereignty with multi-tenancy, for example, in public organizations that do not want their data to be stored alongside data from other parties. He cites a local police department as an example: it cannot accept its AI platform running on infrastructure shared with other tenants, given the data management risks involved. This requires customization and infrastructures that are carefully tailored to the specific requirements of each organization.
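The multi-tenancy constraint Lajoie raises can be expressed as a simple scheduling rule: a tenant flagged as sovereign may only land on infrastructure that hosts no other tenants. This is a hypothetical sketch of that check, with all names invented for illustration.

```python
# Hypothetical tenancy-constraint check: a "sovereign" tenant requires
# a dedicated node; other tenants may share. All names are invented
# for illustration.

def can_schedule(tenant: str, sovereign: bool, node_tenants: set) -> bool:
    """Return True if `tenant` may be placed on a node that currently
    hosts `node_tenants` (which may already include `tenant` itself)."""
    others = node_tenants - {tenant}
    return not (sovereign and others)
```

Under this rule, a sovereign tenant such as a police department is accepted only on an empty or already-dedicated node, while non-sovereign tenants can be co-located freely.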

Balancing innovation and manageability

The rapid developments in AI and the increasing complexity of data management require a new mindset within organizations. Flexibility, scalability, and control are no longer luxuries, but crucial prerequisites for successfully responding to the changing technological and business reality. This means that companies must not only invest in modern infrastructures, but also in the right strategies and processes to implement AI responsibly and effectively.

In addition, there is no single uniform approach. Every organization has its own unique combination of compliance requirements, security requirements, and innovation needs. Embracing hybrid and flexible solutions, in which cloud and on-premises infrastructure complement each other, is proving to be a practical and future-proof choice. This agility allows companies to experiment, scale up, and manage risk at the same time, even in an era of uncertainty and rapid change.

Ultimately, it is about organizations finding the right balance between control and innovation, between stability and agility. By strategically managing data movements and designing the infrastructure to be modular and elastic, they are better prepared to seize the opportunities offered by AI without losing sight of the risks. In this way, AI becomes not just a technology of today, but a sustainable foundation for the organization of tomorrow.

This was the last story in our AI infrastructure series. Be sure to read our first article and second article, in which the six experts also share their views.