
Wiz: AI is infrastructure, even if not everyone realizes it yet

Despite ongoing doubts about the precise shape of AI adoption, the technology has become a core component of cloud infrastructure. That is the conclusion of the Wiz Research report State of AI in the Cloud 2026. AI tools help mitigate exploits, but they also enlarge the attack surface.

AI is no longer an experiment in the cloud, the study shows, but it is often not clearly mapped out. At least 81 percent of the analyzed cloud environments use managed AI services, an increase from the 74 percent reported by Wiz researchers in early 2025. At least 90 percent run self-hosted AI software. AI has thus become deeply intertwined with development workflows, orchestration layers, and production systems.

Remarkably, organizations running AI are by no means always aware that they own it. Of the organizations running self-hosted AI models, 68 percent do so at least partially via models included in third-party software. Eighteen percent even rely exclusively on such “transitive” components. Organizations thus inherit AI functionality through vendors and integrations without ever intentionally designing it in. Wiz’s own 2025 Security Readiness Survey already showed that 25 percent of respondents had no visibility into which AI services were running in their environment. That lack of visibility has now become a structural governance problem, according to Wiz.

This should come as no surprise to anyone, though the exact figures help clarify the problem. Previous research showed that uncontrolled AI adoption increases the risk of shadow IT, with 29 percent of AI agents not officially approved.

AI is piling up without visibility

AI-driven development is now the norm. At least 80 percent of organizations use AI IDE extensions, and 71 percent have one or more AI copilots in use. Incidentally, copilot adoption typically proceeds bottom-up, driven by individual developers without a centralized policy. This creates so-called “shadow AI” pockets in development environments that fall outside the reach of security teams. Given the changing pricing models, this can be a costly mistake, aside from the security risks.

Those risks are now just as palpable as the financial strain caused by AI. Back in September, Wiz Research found that at roughly one in five organizations using AI-powered code-generation platforms, applications were affected by structural security issues. These flaws were not the result of individual errors, but of recurring patterns and default settings that AI systems built upon. Wiz subsequently worked with platforms such as Lovable to raise security thresholds. It has also been shown previously, then in the context of open source and IP control, that AI-generated code tends to repeat the same insecure patterns.

In addition, AI agents and MCP servers are growing rapidly. At least 57 percent of organizations have deployed at least one self-hosted AI agent technology. MCP servers are present in at least 80 percent of cloud environments. Of that group, 5 percent have at least one MCP server that is directly accessible via the internet. This is a concrete threat that can only be mitigated with additional layers of security.
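An internet-exposed MCP server is, at minimum, a reachable network endpoint, so a first defensive step is simply confirming what answers from the outside. The sketch below is a minimal, hypothetical reachability check; the host and port in the usage comment are placeholders (MCP has no single standard port), not values from the report.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A True result only means something is listening; it says nothing about
    authentication, so any hit still needs manual follow-up.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage against an endpoint you are authorized to test:
# exposed = is_port_open("mcp.example.internal", 8080)
```

A check like this belongs in an external scan (run from outside the perimeter), since an internally reachable port says nothing about internet exposure.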

Attackers are taking advantage of AI assistance

AI not only expands the attack surface; it also lowers the barrier to entry for attackers. That has almost become a cliché, but it remains just as true. Wiz, in collaboration with Rubrik Zero Labs, documented malware that uses AI services to modify its execution during runtime. In the s1ngularity supply chain attack, malicious packages exploited installed AI-driven CLIs, including Claude, Gemini, and Amazon Q, for reconnaissance and the collection of login credentials.
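The s1ngularity case shows why it helps to know which AI CLIs are present on developer machines before an attacker enumerates them. A minimal inventory sketch along those lines; the candidate binary names below are illustrative assumptions for the tools the article mentions, not verified executable names.

```python
import shutil

# Candidate binary names for AI CLIs; assumptions for illustration,
# not authoritative executable names.
AI_CLI_CANDIDATES = ["claude", "gemini", "q", "ollama"]

def find_ai_clis(candidates: list[str]) -> dict[str, str]:
    """Map each candidate CLI name found on PATH to its resolved path."""
    found = {}
    for name in candidates:
        path = shutil.which(name)
        if path:
            found[name] = path
    return found

if __name__ == "__main__":
    for name, path in find_ai_clis(AI_CLI_CANDIDATES).items():
        print(f"{name}: {path}")
```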

Other research, specifically from the Censys platform, found thousands of publicly accessible Ollama instances. Wiz Research itself previously discovered the Probllama vulnerability (CVE-2024-37032) in Ollama, a critical flaw that enabled remote code execution. Google Threat Intelligence analyzed a campaign in which attackers exploited OAuth tokens linked to AI integrations to gain access to Salesforce environments and exfiltrate data.
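What makes exposed Ollama instances easy to find is that the Ollama HTTP API is unauthenticated by default: it listens on port 11434, and `GET /api/tags` returns a JSON object with a top-level `models` list. A hedged sketch of such a fingerprint check; the hostname in the usage comment is a placeholder.

```python
import json
import urllib.request

def looks_like_ollama(body: str) -> bool:
    """Heuristic: an Ollama /api/tags response is a JSON object
    with a top-level "models" list."""
    try:
        data = json.loads(body)
    except ValueError:
        return False
    return isinstance(data, dict) and isinstance(data.get("models"), list)

def check_ollama(host: str, timeout: float = 3.0) -> bool:
    """Return True if host exposes what looks like an open Ollama API."""
    url = f"http://{host}:11434/api/tags"  # Ollama's default port
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return looks_like_ollama(resp.read().decode("utf-8", "replace"))
    except OSError:
        return False

# Hypothetical usage against a host you are authorized to scan:
# print(check_ollama("ai.example.internal"))
```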

At the same time, Wiz is working on the defensive side. The company emphasizes the impact of AI on runtime security, and recently built a multi-cloud platform on top of the Wiz Security Graph to detect AI-driven threats regardless of the IT environment. Meanwhile, 70 percent of organizations view AI as their biggest data risk, with poor visibility into data paths as the primary concern.

Wiz argues that securing AI components should be given the same priority as any other cloud workload. That means a focus on inventory, configuration management, identity governance, and exposure management. According to the report, the security question is shifting from “which AI provider do we use” to “which AI components are already running in our environment.” Many organizations will be surprised by how much shadow AI is still hidden in their systems, even in software from providers they already know.