The added value of agnostic hybrid cloud

“AI is an application, treat it that way in your IT infrastructure”

Artificial intelligence works best when you treat it as an application in an agnostic hybrid cloud model. So argues Carrie Carrasco, Director Hybrid Platform Specialist at Red Hat.

IT infrastructure has become quite complex, because companies use many different environments and resources, each very good at its own task. There is something to be said for each of them, from public cloud to private cloud and from on-premises to the edge, simply because it is the most logical and desirable choice in a given situation.

Red Hat wants to enable this infrastructure landscape through the agnostic hybrid cloud model. This model aims to bring the best of the data center, private cloud, and public cloud together and leverage their potential. “This is the key not only for traditional applications but also for modern applications and companies that want to do more with AI,” Carrasco said.

All types of applications

So, Carrasco sees AI as an application, comparable to mission-critical software. Both are at the root of processes, customer interactions, and decision-making. For such an application, you want to emphasize scalability, reliability, and integration into the hybrid infrastructure, and you can only achieve those characteristics by applying the same handling principles. Like a mission-critical application, AI must be set up with clear objectives, planning, and a solid implementation strategy.

Carrie Carrasco of Red Hat

So, are there no differences at all between AI and mission-critical software? We shouldn’t go that far, Carrasco acknowledges. Companies should ensure that infrastructure is available for AI so that models can be trained and deployed effectively. Compared with mission-critical software, other things become essential, such as a greater dependence on GPUs instead of CPUs. On top of that, many resources may suddenly have to be added to meet demand, for example through extra servers and storage systems. This is where Red Hat responds in its product offering with OpenShift AI, a platform optimized to let companies build and deliver AI-based applications at scale in hybrid cloud environments.

The agnostic hybrid cloud

In this philosophy, adopting a scalable IT environment is crucial, so that developing, testing, and deploying essential applications is supported according to modern standards. For example, you can train computationally intensive AI models in the public cloud, where on-demand resources are available, while keeping sensitive data and critical workloads in the private cloud for optimal security and regulatory compliance.

The hybrid cloud also facilitates bringing AI applications into production. As much as possible, management is done in one environment, regardless of where the underlying resources reside. This ensures an efficient and coherent development and operational workflow. Data scientists, engineers, and application developers can collaborate on the same platform. They can then run the application in any environment they choose, i.e., on-premises, the private cloud, or the public cloud.
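To make that portability concrete, here is a minimal sketch (our illustration, not a Red Hat example) using the Kubernetes Python client: the same containerized application is rolled out unchanged to an on-premises cluster and a public cloud cluster simply by switching kubeconfig contexts. The context names, namespace, and image are hypothetical placeholders.

```python
from kubernetes import client, config


def deploy(context: str, image: str) -> None:
    # Load credentials for the target cluster from the local kubeconfig.
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()

    # One and the same Deployment definition, regardless of where the cluster runs.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inference-app", labels={"app": "inference"}),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "inference"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "inference"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="inference",
                        image=image,
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="ai-apps", body=deployment)


# Hypothetical kubeconfig contexts: one on-premises, one in the public cloud.
for ctx in ["onprem-cluster", "public-cloud-cluster"]:
    deploy(ctx, "registry.example.com/team/inference-app:1.0")
```

Because the deployment definition is identical everywhere, the choice of environment becomes a matter of which cluster the credentials point at.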

Carrasco sees this agnostic hybrid model as the future. And as far as she is concerned, that model is based on three core components: Exploration, Trust, and Resilience. These three apply to any type of application, so below we focus specifically on the three phases of an application’s lifecycle.

Slide titled "The Childhood", with the subtitle "Exploration: Choosing your adventure, with guardrails." It shows an illustration of a person working at a computer, with the Red Hat logo in the corner.

It starts with experimentation

The lifecycle starts with exploration. When we relate this to AI, it involves exploring various AI models, supporting platforms, and automation tools. There’s an awful lot of choice there; the key is to determine how AI can best support business objectives. During this phase, hypotheses are tested, usage scenarios identified, and technological feasibility evaluated before large-scale implementation.

An exploration phase allows organizations to experiment quickly with different AI technologies and applications without significant investments or risks. Proof-of-concepts (POCs) and pilot projects enable companies to understand AI’s potential benefits and challenges. This approach helps set realistic expectations, identify promising AI solutions that can be developed further, and show where visible results can be achieved quickly.

Organizations that master Exploration correctly have an edge in innovation, according to Red Hat. To do this, you must know what type of AI application is preferred. Is predictive analytics, natural language processing, or automated decision-making most desirable? Or does your company need a combination? The exploration phase provides time to determine the right path and an environment for testing and refining AI models before integrating them into broader business processes.

Building reliability and transparency

While the benefits of AI are now clear to many, the technology also comes with the challenge of building trust. Organizations must be able to rely on the accuracy, ethics, and transparency of AI systems to ensure acceptance and effective implementation. Ideally, sufficient safeguards are built in for validation, monitoring, and minimizing bias. This becomes even more important with sensitive subject matter, for example legal documentation and financial forecasts, where domain-specific knowledge and guidelines must be taken into account.

Slide titled "Adolescence", with the subtitle "Trust: the responsibilities of growing up." The Red Hat logo is visible in the corner, along with an illustration of a shield.

Transparency in AI systems can help when aiming for a high level of Trust. This includes providing insight into the decision-making processes of AI models and ensuring the traceability of data and decisions. By promoting transparency, organizations can ensure accountability and compliance with legal and ethical standards.

After all, creating trust is essential for any application, and it only becomes more important as AI apps move out of the exploration phase and into production. For them to succeed there, you must build in safeguards, which the Red Hat Trusted Software Supply Chain enables. As a result, companies can bring applications into production with confidence.

Read more about the Trusted Software Supply Chain in our earlier article.

A resilient infrastructure for applications

Once trust is established and people start using the application, the final phase of Carrasco’s lifecycle begins: achieving Resilience, a level of infrastructure resilience that keeps the application running. The AI system must continue functioning under all circumstances, despite disruptions such as cyberattacks, technical failures, or changing operational conditions. There are obvious things to take care of in this area, such as disaster recovery and monitoring to anticipate potential disruptions. While we call this obvious, it does not necessarily mean every company has it well arranged: the technology is available, but implementation sometimes lags behind in practice.

“You have to automate anything you can automate”

In this phase in particular, Red Hat sees great value in automating the infrastructure. This is where the Red Hat Ansible Automation Platform fits in: once an observability tool detects that an event is happening, Ansible can kick off a playbook that triggers the appropriate actions. Carrasco sees this as a crucial component of the agnostic hybrid cloud: implement as much automation as possible throughout the organization and keep it running efficiently.
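As a simple illustration of that event-driven pattern (a sketch under our own assumptions, not Red Hat’s implementation), the snippet below uses the ansible-runner Python library to launch a remediation playbook whenever a monitoring webhook delivers an alert. The port, project directory, and playbook name are hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import ansible_runner  # pip install ansible-runner


class AlertHandler(BaseHTTPRequestHandler):
    """Accepts alert webhooks from an observability tool and triggers a playbook."""

    def do_POST(self):
        # Read the JSON alert payload sent by the monitoring tool.
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length) or b"{}")

        # Kick off the remediation playbook; "project_dir" and
        # "restart_service.yml" are placeholders for your own Ansible project.
        result = ansible_runner.run(
            private_data_dir="project_dir",
            playbook="restart_service.yml",
            extravars={"alert_name": alert.get("name", "unknown")},
        )

        self.send_response(200 if result.rc == 0 else 500)
        self.end_headers()


if __name__ == "__main__":
    # Listen for incoming alert webhooks on a hypothetical port.
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

In practice, the platform’s own event handling would typically take the place of a hand-rolled webhook receiver, but the flow is the same: an alert comes in, a playbook runs, the infrastructure heals itself.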

A presentation slide titled "Emancipation", with the subtitle "Resilience: dealing with failures." A Red Hat logo appears in the bottom-right corner.

Therefore, an agnostic hybrid cloud model optimized for any application (AI, legacy, or modern) is the future. By treating AI as an application and using a hybrid infrastructure, organizations can innovate and deliver robust, reliable solutions that meet modern IT standards. They are thus ready for a future with a variety of applications.

Tip: 33-year-old Linux is a staple of IT infrastructures