Red Hat sees AI and sovereignty reshaping hybrid cloud

Red Hat is positioning its open-source platform as the foundation for companies navigating AI adoption and digital sovereignty. In a recent interview with Techzine, Chief Product Officer Ashesh Badani explained how the company's more than 25-year-old open-source philosophy now applies to the AI era, where choice and control matter more than ever.

Red Hat has built its business on open source for over two decades. That philosophy now extends to artificial intelligence, where companies face new challenges. The market is flooded with large language models, hardware accelerators, and deployment options. Organizations struggle to choose between different GPU vendors, cloud providers, and AI models.

The company believes this mirrors earlier technology shifts. Just as multiple public clouds emerged after Amazon's early dominance, Red Hat expects similar diversity in GPU environments and AI infrastructure. Companies will want options for where they run workloads, which accelerators they use, and which models they deploy, including smaller task-focused models that require less computational power.

“Innovation doesn’t happen within the four walls of any one organization,” Badani explains. “It happens because of collaboration around the world.” This open approach aims to prevent vendor lock-in while giving companies flexibility. Red Hat points to its history with Linux and Kubernetes as proof that open standards create both innovation and operational efficiency.

Also read: Chris Wright: AI needs model, accelerator, and cloud flexibility

Sovereignty without clear definition

Digital sovereignty has become a significant topic, but defining it remains difficult. Badani points out that during an executive roundtable, participants couldn’t agree on a single definition. Some view sovereignty as control over data and applications. Others see it as a national or regional infrastructure requirement.

Red Hat encounters this ambiguity across markets. Europe shows particular interest in sovereign cloud solutions, but the Middle East, Asia and South America are also exploring similar concepts. The discussion revealed that companies know they want sovereignty, but many haven’t fully defined what that means for their operations. “One business leader in the room said sovereignty is about control,” Badani notes. “Control over my data, my applications, my processes, my environments.”

The lack of clarity creates challenges for vendors and customers alike. Without agreed standards, building sovereign solutions becomes complicated. Red Hat positions its open-source foundation as a way to address these concerns, regardless of how sovereignty is ultimately defined, by providing choice, portability across suppliers, and transparency through access to source code.

Hardware partnerships expand AI reach

Red Hat earlier announced collaborations with major chip manufacturers to support diverse AI infrastructure. The partnerships with Nvidia, Intel and AMD aim to certify hardware for Red Hat OpenShift AI.

Nvidia’s partnership involves supporting NIM microservices on OpenShift AI. These inference microservices belong to Nvidia’s AI Enterprise platform. The integration promises streamlined deployment alongside other AI implementations.

Intel's collaboration focuses on certifying Gaudi AI accelerators, Xeon processors, and Arc GPUs for OpenShift AI. The goal is to ensure interoperability across Intel's product line for model development, training, and monitoring.

AMD, meanwhile, joins through GPU Operators on OpenShift AI, which provide processing power for AI workloads in hybrid cloud environments. Red Hat argues these partnerships enable hardware choice while maintaining performance.

The company sees value in supporting multiple accelerator types. Companies currently experimenting with AI often don’t know which hardware will best serve their needs. Having certified options for Intel, AMD, and Nvidia chips gives organizations flexibility as requirements evolve.

Applications blur with AI integration

The line between traditional applications and AI applications is disappearing. Red Hat expects all its products will eventually be either AI-enabled or AI-assisted. Users will expect AI capabilities as standard, similar to how internet connectivity became standard.

Development tools illustrate this shift. Integrated development environments have existed for 30 years. Now, tools like Cursor perform similar functions but get labeled as AI tools. The distinction becomes unclear when existing software adds AI capabilities. “Is that a traditional application or is that an AI application?” Badani wonders, highlighting the confusion around categorization.

Red Hat’s platform aims to support both scenarios. Some workloads will be purely AI-focused, like large language model deployment. Others will be traditional applications enhanced with AI features. The architecture needs to handle this spectrum without requiring separate infrastructure.

Of course, Red Hat is also stepping up its presence in agents, which it sees as the natural evolution of cloud-native services. These autonomous systems will increasingly handle tasks that previously required human intervention. Red Hat views agents as applications that happen to use AI, rather than a completely separate category.

Early days for AI adoption

Despite the hype, enterprise AI adoption remains in early stages. Companies are trying to determine which models to use, how to deploy them, and where regulations apply. The technology itself is still being defined. Red Hat reports having over 1,000 customer engagements around AI this year. Many organizations want to move from token consumption to token provisioning. Instead of paying external services for every API call, companies want to run their own inference infrastructure.

This shift requires platforms that can provide GPUs as a service, models as a service, and proper lifecycle management. Organizations need tools for monitoring usage, applying guardrails, and managing model versions. The company draws parallels to earlier technology transitions. Containerization and microservices faced similar challenges around management and deployment. The solutions developed for Kubernetes now apply to AI workloads, but are adapted for the specific requirements of models and inference.
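The economics behind the shift from token consumption to token provisioning can be framed as a simple break-even calculation. A minimal sketch, with hypothetical prices and function names that are illustrative assumptions, not Red Hat figures:

```python
def breakeven_tokens(api_price_per_mtok: float, monthly_infra_cost: float) -> float:
    """Monthly token volume at which self-hosted inference (token
    provisioning) costs the same as paying an external service per
    call (token consumption). All inputs are illustrative."""
    return monthly_infra_cost / api_price_per_mtok * 1_000_000

# Hypothetical numbers: $2 per million tokens from an external API
# versus a fixed $4,000/month for self-hosted GPU capacity.
volume = breakeven_tokens(api_price_per_mtok=2.0, monthly_infra_cost=4000.0)
print(f"Break-even at {volume:,.0f} tokens per month")
```

Above that volume, provisioning your own inference infrastructure starts to pay off; below it, consuming tokens externally remains cheaper, before factoring in the sovereignty and data-control considerations discussed above.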

Optimism versus caution

Attitudes toward AI vary between regions and organizations. According to a recent Red Hat study, the United States appears more permissive, encouraging experimentation, while Europe shows more caution, with concerns about bias, security, and unintended effects of AI systems.

This creates tension between the desire to innovate and the need to establish proper governance. Companies require evaluation frameworks, bias detection, and security controls. They also need policies that don't completely stifle experimentation. “AI technology is here. It's unlikely to go away,” Badani says. “So the question is, how do we harness it for the greatest effect?”

Red Hat's conversations with companies reveal mixed perspectives. Some see AI's promise for productivity and new capabilities. Others worry about risks and implementation challenges. The open source giant positions its platform as a way to benefit from AI while maintaining control. Red Hat believes its AI framework will help remove uncertainty by letting customers run their workloads on any model, any accelerator, and any cloud. “We want to enable customers to get started while knowing that using Red Hat means they're prepared for all of the innovation we expect over the coming years,” Badani said.

The discussion acknowledged that AI definitions continue to evolve. What counts as artificial intelligence versus standard automation isn’t always clear. As capabilities improve and reasoning models develop, new use cases will emerge that don’t exist today.

Platform approach for uncertainty

Red Hat’s strategy centers on providing a unified platform for AI and traditional applications. The OpenShift foundation supports containers, virtual machines, and now AI workloads. This consolidation aims to simplify infrastructure while giving organizations flexibility. The company added specific AI products, including OpenShift AI, Enterprise Linux AI, and the AI Inferencing Server. These tools handle different parts of the AI lifecycle, from development to deployment to inference.

By building on open source, Red Hat argues that companies retain choice and flexibility over software and hardware. If requirements change or better models emerge, organizations can adapt without replacing their entire infrastructure. The same platform that runs today’s workloads should handle tomorrow’s innovations. This approach answers the uncertainty in the AI market. Companies don’t know which technologies will dominate or how regulations will develop. An open, flexible platform provides insurance against making the wrong architectural bet.

Badani emphasizes that both innovation and efficiency matter. New capabilities need to work with existing investments. Companies can't rip out everything to adopt AI. Platforms that bridge legacy and modern workloads while supporting diverse hardware will likely see adoption. Red Hat's more than 25 years in open source position it for this transition. The same principles that drove Linux adoption now apply to AI infrastructure.