
SUSE AI: a vision now, a product later

Unlike virtually every other IT company, SUSE is not yet presenting an AI product. What it currently calls SUSE AI is a breeding ground for open, secure and controllable AI deployments.

Pilar Santamaria, SUSE’s VP of AI, sees AI as a life-changing opportunity for businesses. Applying it in the real world today, however, requires capital that many lack. Organizations that can’t or won’t cede data to the public cloud are limited in their ability to fully benefit from GenAI’s rapid development. SUSE wants to change that.

To that end, it is inviting customers and partners to participate in the SUSE AI Early Access Program, which is designed to pave the way for a secure open-source AI future for enterprises.

Co-innovate before there is a product

Santamaria believes sovereign AI will become a reality with SUSE AI. This includes “talking to your data,” as she puts it. That’s not a new line of thinking (or even a new phrase), but SUSE’s vision differs from that of the current leaders in AI (the Googles and OpenAIs of this world). It revolves around openness, security and controllability. That sounds a bit vague, as one might expect when there’s no product to show off just yet. SUSECON 2024, held in Berlin, is, however, a fitting occasion to launch the aforementioned AI Early Access Program.

84 percent of organizations are considering implementing their AI deployments with open-source resources. They do so because control over models, workloads and data is critical, Santamaria believes. “If you’re not driving, you are the data.” To give everyone an equal opportunity, SUSE is enlisting the help of the open-source community. Through a collaborative effort, there should be a way to democratize AI. How, exactly?

A few clues

We hear a few hints about what SUSE AI will mean, even though it is still in development. The “human first AI” tagline SUSE emphasizes doesn’t tell us much. More concrete is the promise that SUSE AI can run anywhere, be it on-prem, in the cloud or at the edge. Santamaria also refers to synthetic data, which can supplement AI training data with fictitious information. The trick is that the resulting LLM retains the same functionality, but without sensitive data that could leak out through a cleverly engineered prompt.
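SUSE hasn’t detailed how its synthetic data generation works, but the general idea can be sketched in a few lines. The toy example below (field names and values are entirely hypothetical, and it uses only naive per-column resampling, not any SUSE tooling) shows how fictitious records with the same schema and value ranges can stand in for real ones:

```python
import random

# Hypothetical "real" records containing sensitive information.
real_records = [
    {"age": 34, "city": "Berlin", "salary": 52000},
    {"age": 41, "city": "Munich", "salary": 61000},
    {"age": 29, "city": "Berlin", "salary": 48000},
]

def synthesize(records, n, seed=0):
    """Draw each field independently from its observed values.

    The output rows share the schema and value ranges of the input,
    but field combinations are decoupled from real individuals.
    (A toy illustration, not a privacy guarantee: real generators
    use far more careful statistical modeling.)
    """
    rng = random.Random(seed)
    fields = list(records[0].keys())
    columns = {f: [r[f] for r in records] for f in fields}
    return [{f: rng.choice(columns[f]) for f in fields} for _ in range(n)]

synthetic = synthesize(real_records, 5)
```

A model trained on `synthetic` instead of `real_records` sees realistic-looking values without memorizing any one person’s actual record, which is the property the article describes.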

Ultimately, an organization may go along with SUSE AI for a pretty straightforward reason: it’s probably cheaper to do so. Unpredictable AI token expenditure further drives up cloud costs, which are hard to keep a lid on as it is. Being able to determine exactly where AI runs through SUSE AI should mitigate that problem. With its newly presented vision, SUSE aims to be a leading “open AI” platform that accelerates model and application development.

How SUSE AI will be turned into a tangible product will be revealed at KubeCon + CloudNativeCon North America on Nov. 12-15.

Also read: GenAI adoption lags far behind GenAI excitement