At Now & Next 2026 in Miami, IGEL announced support for locally running language models on IGEL OS via Ollama. The capability is part of the AI Armor feature, which provides dynamic runtime security. AI Armor also relies on a central policy engine in the Universal Management Suite (UMS) to meet compliance requirements.
Ollama plays a central role in enabling local AI. The open-source platform is designed to download and run LLMs directly on an endpoint. When an employee receives a priority email, for example, they can use Google’s Gemma 3 model to generate an appropriate reply on the spot. No cloud service is required, there are no tokens or subscriptions to pay for, and nothing leaves the PC or the corporate network.
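To illustrate the idea, here is a minimal sketch of querying a locally running Ollama instance to draft an email reply. It assumes Ollama is listening on its default port (11434) and that a Gemma 3 model has already been pulled (for example with `ollama pull gemma3`); it is not IGEL's integration, just the underlying Ollama API.

```python
# Minimal sketch: drafting an email reply with a locally running Ollama instance.
# Assumes Ollama is installed, listening on its default port (11434), and that
# the Gemma 3 model has already been pulled (e.g. `ollama pull gemma3`).
import requests

def draft_reply(email_text: str) -> str:
    # Ollama's local REST API; the request never leaves the machine.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "gemma3",
            "prompt": f"Write a short, polite reply to this email:\n\n{email_text}",
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(draft_reply("Can you confirm the project kickoff is still on Friday?"))
```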
Through UMS, administrators can precisely control which users have access to Ollama and which prompts are allowed. The use cases IGEL lists include text transformation, translation, summarization, code generation, and search queries. IGEL emphasizes that the push of AI to the edge is also driven by cost: cloud-based AI services can add up significantly for organizations.
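IGEL has not published the format of these UMS policies, so the sketch below is purely a hypothetical illustration of the concept: a client-side gate that only lets prompts tagged with an allowed use case reach the local model. The class and field names are assumptions, not IGEL's actual schema.

```python
# Hypothetical illustration only: the policy structure and field names below are
# assumptions for the sake of the example, not IGEL's UMS API.
from dataclasses import dataclass, field

ALLOWED_USE_CASES = {"text_transformation", "translation", "summarization",
                     "code_generation", "search"}

@dataclass
class PromptPolicy:
    user_group: str
    allowed_use_cases: set[str] = field(default_factory=lambda: set(ALLOWED_USE_CASES))

    def permits(self, use_case: str) -> bool:
        # Only prompts tagged with an allowed use case are forwarded to Ollama.
        return use_case in self.allowed_use_cases

policy = PromptPolicy(user_group="finance",
                      allowed_use_cases={"summarization", "translation"})
print(policy.permits("summarization"))    # True
print(policy.permits("code_generation"))  # False
```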
AI Armor and Endpoint Governance
In addition to local AI, IGEL continues to build out AI Armor. For example, the platform now tracks which processes request access to the device’s NPU or GPU, and administrators can selectively block that usage.
Only applications that have been approved and signed via the App Portal are allowed to run. In addition, IGEL is developing a central policy engine within UMS that offers pre-built templates for regulatory compliance. With a single click, an administrator can enable the correct settings for their industry without having to know each specific compliance requirement.
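The one-click template idea can be pictured roughly as follows. This is a hypothetical sketch: the template names and settings are illustrative assumptions, not IGEL's actual UMS schema.

```python
# Hypothetical sketch of the "one-click compliance template" concept; template
# names and settings are illustrative assumptions, not IGEL's actual UMS schema.
COMPLIANCE_TEMPLATES = {
    "healthcare": {"local_ai_only": True, "block_unsigned_apps": True, "npu_access": "allowlist"},
    "finance":    {"local_ai_only": True, "block_unsigned_apps": True, "prompt_logging": True},
}

def apply_template(industry: str) -> dict:
    # Selecting a template applies all of its settings at once, so an admin does
    # not need to know each underlying compliance requirement individually.
    try:
        return COMPLIANCE_TEMPLATES[industry]
    except KeyError:
        raise ValueError(f"No compliance template defined for {industry!r}")

print(apply_template("healthcare"))
```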
Tip: IGEL brings ‘smarter Zero Trust’ Contextual Access to endpoints