Rapid AI adoption increases risk of shadow IT

Rapid AI adoption increases risk of shadow IT

More than 80 percent of the world’s largest companies now use AI in software development, but many organizations have little control over its use. Microsoft warns that shadow use of AI is becoming a serious security risk.

This is evident from Microsoft’s recent Cyber Pulse Report , which was published in the run-up to the Munich Security Conference. According to the report, AI programming assistants are now used by more than 80 percent of Fortune 500 companies. Adoption is rapid, but clear frameworks and specific security measures are lagging behind in many cases.

Microsoft explicitly refers to a growing gap between innovation and security. While AI agents are rapidly spreading within organizations, less than half of companies have specific security controls for generative AI. At the same time, 29 percent of employees use unapproved AI agents for work. This creates a new form of shadow IT, referred to as shadow AI.

Shadow AI refers to the use of AI applications without the knowledge or approval of the IT or security department. Employees independently use external tools or autonomous agents to perform tasks more quickly. What starts as an efficiency gain can result in a structural blind spot in security. In such cases, IT departments do not know which systems are active, what data is being processed, and what access rights have been granted.

Well-designed governance is important

According to Microsoft, an important risk lies in the speed with which AI agents are rolled out. Rapid implementation can undermine existing security and compliance controls. When organizations do not take enough time to set up governance, there is an increased risk that agents will be given too much authority or have access to sensitive information without appropriate supervision.

According to the company, the risks are not purely theoretical. Microsoft’s Defender team recently identified a fraud campaign in which attackers used a technique known as memory poisoning. This involves deliberately manipulating the memory of AI assistants, thereby structurally influencing outcomes. This underscores that AI systems themselves can become an attack vector if they are not adequately protected.

An additional point of concern is the danger of overprivileged agents. Like human accounts, AI agents can have broad access rights to multiple data sources and applications. If an agent is compromised or misdirected, it can lead to large-scale data leaks or abuse. Microsoft also warns that an agent with too much access or incorrect instructions can turn into a digital double agent.

To mitigate these risks, Microsoft advocates a zero trust approach to AI agents. Each agent must be explicitly verified, access rights must be strictly limited to what is necessary, and activities must be continuously monitored. In addition, the company recommends maintaining a central registry that records which AI agents are active within the organization, who owns them, and what data they have access to. Unauthorized agents must be actively tracked down and isolated.

The rise of AI within organizations seems irreversible. The challenge lies not in stopping innovation, but in introducing it in a controlled manner. Without clear governance, transparency, and appropriate security measures, shadow AI threatens to become a structural and difficult-to-manage risk within the modern IT environment.