SUSE sets out to prevent a wave of DIY AI failures

Built-in observability and an AI inference engine are coming to SUSE Rancher Prime and SUSE AI. Many AI initiatives risk failing for lack of return on investment; SUSE's new tooling aims to head off that problem.

The expansion of SUSE’s cloud-native platform therefore focuses on managing AI workloads, regardless of environment: on-premises, cloud, or hybrid. According to SUSE CPTO Thomas di Giacomo, containers will come to dominate AI just as they conquered the cloud world. The cost and usage insights already available for cloud consumption need a counterpart for AI workloads. Without one, the IDC prediction that 65 percent of do-it-yourself agentic AI projects within organizations will fail by 2028 seems likely to come true.

Universal MCP proxy simplifies management

The new release includes an integrated Model Context Protocol (MCP) proxy, currently in tech preview. The proxy centralizes the management of MCP endpoints, streamlining the costs associated with AI models and tightening control over data access. SUSE previously added MCP components to SUSE Linux Enterprise Server 16 and is steadily rolling the standard out across its portfolio.

The broad range of supported inference engines includes popular platforms such as vLLM, which provide fast, efficient, and scalable LLM inference. In addition, the release offers ready-to-use observability for Ollama, Open WebUI, and Milvus via Open WebUI pipelines. This lets SUSE cover a large share of real-world deployments, especially at organizations that build their AI stack themselves. Despite the plug-and-play nature of tools such as Ollama and the Open WebUI interface, they remain fairly basic; connecting models to data still requires considerable manual work on top of such software.
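Observability for inference tooling typically means wrapping each model call to record metrics such as latency and token counts. The sketch below shows that pattern in plain Python under stated assumptions: `fake_generate` is a hypothetical stand-in for an Ollama or vLLM call, and the `metrics` list stands in for a real telemetry sink; this is not the Open WebUI pipelines API.

```python
# Illustrative sketch: the kind of per-call metric an observability layer
# collects around inference. fake_generate and metrics are hypothetical
# stand-ins, not Open WebUI's actual pipeline interface.
import time

metrics = []  # stand-in for a real telemetry backend

def observed(model_fn):
    """Decorator that records latency and rough token counts per call."""
    def wrapper(prompt):
        start = time.perf_counter()
        reply = model_fn(prompt)
        metrics.append({
            "latency_s": time.perf_counter() - start,
            "prompt_tokens": len(prompt.split()),  # crude whitespace tokenizer
            "reply_tokens": len(reply.split()),
        })
        return reply
    return wrapper

@observed
def fake_generate(prompt):
    return "echo: " + prompt  # stand-in for an Ollama/vLLM inference call

fake_generate("hello world")
print(metrics[0]["prompt_tokens"], metrics[0]["reply_tokens"])  # 2 3
```

The decorator leaves the model call untouched, which is why such observability can be bolted onto existing tools without modifying them.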

Rancher Prime gets contextual AI agent

SUSE Rancher Prime has also been expanded with Liz, which SUSE describes as a context-aware AI agent that simplifies Kubernetes management, enabling teams to detect problems proactively and optimize performance. Liz is available in tech preview. Users can now also deploy virtual clusters, which help companies make the most of costly GPU resources and provide flexibility during every phase of the AI lifecycle.

The new version of SUSE Virtualization is also available in tech preview. It offers advanced networking functionality such as microsegmentation. The solution decouples network functions from physical hardware to support software-defined networking.

Starting today, SUSE Observability includes a new dashboard editor that lets users visually organize operational data and share practical insights. The new version also adds support for the popular OpenTelemetry framework.
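OpenTelemetry structures telemetry as spans: named, timed operations with attributes, which nest to form traces. As a rough illustration of that data model only (the real OpenTelemetry SDK has a different API; the `Span` class below is a hypothetical toy):

```python
# Toy model of an OpenTelemetry-style span: a named, timed operation with
# attributes. Mimics the data model only; the real SDK API differs.
import time

class Span:
    def __init__(self, name):
        self.name = name
        self.attributes = {}

    def __enter__(self):
        self.start = time.perf_counter_ns()  # span start timestamp
        return self

    def __exit__(self, *exc):
        self.end = time.perf_counter_ns()  # span end timestamp

with Span("model.inference") as span:
    span.attributes["model"] = "llama3"  # hypothetical attribute
    time.sleep(0.01)  # stand-in for actual inference work

print(span.name, span.end > span.start, span.attributes)
```

A dashboard editor like the one in SUSE Observability can then chart such spans by name, duration, and attribute to surface where AI workloads spend their time.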

Strategic partnerships strengthen AI governance

SUSE is working with multiple partners to round out its AI offering. Among others, it has entered into partnerships with ClearML and Katonic for MLOps and GenAI, AI & Partners for AI governance and compliance, Avesha for GPU orchestration, and Altair for HPC and AI solutions.