
AI code undermines control over open source and IP

The rise of generative AI has transformed software development at record speed, and that acceleration brings growing risks with it. That is the conclusion of the OSSRA Report 2026, an annual analysis of the state of open-source software. While AI tools are lowering the barrier to development, the gap between speed and manageability is widening.

In just over a year and a half, AI code assistants have grown from an experiment to an integral part of modern development environments. They are driving strong productivity growth, but organizations are not keeping up with the associated security and governance issues. The report shows that the number of files and open-source components per codebase has increased significantly, causing complexity to grow faster than existing processes can handle.

This complexity directly leads to more vulnerabilities. For the first time, the average number of open-source vulnerabilities per codebase has more than doubled. Virtually all codebases examined contain security issues, often with a high or critical risk level. At the same time, attacks on the software supply chain are increasing, with attackers increasingly targeting the open-source ecosystem itself.

License conflicts are getting out of hand

Not only is security under pressure; the legal risk surrounding open source is also growing rapidly. The number of license conflicts has reached an all-time high in 2026. Two-thirds of the codebases examined contain conflicting open-source licenses. In one case there were thousands of separate conflicts within a single codebase, underscoring the increased complexity of IP management.

A major cause is the way AI handles source code. AI systems generate code fragments derived from copyleft licenses without including the associated license information. This puts organizations at risk of unintentionally violating license terms. Because not all companies actively monitor AI-generated code, legal risks often remain hidden for a long time.
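One practical countermeasure is to require license provenance in every source file and flag files that lack it. The sketch below uses the real SPDX header convention (`SPDX-License-Identifier:`); the scanning logic itself is a minimal illustration, not a substitute for a full license-compliance tool:

```python
import os
import re

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*\S+")

def find_unattributed(root: str, exts=(".py", ".c", ".js")) -> list[str]:
    """Return source files that carry no SPDX license tag.

    Code that enters a repo via copy-paste or AI generation often lacks
    this tag, making such files candidates for manual license review.
    """
    flagged = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                head = fh.read(2048)  # license headers sit at the top of a file
            if not SPDX_RE.search(head):
                flagged.append(path)
    return sorted(flagged)
```

Running such a check in CI makes unattributed fragments visible at review time instead of during a later audit.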

These concerns are widely shared in the sector. SDTimes points to other studies that show that AI also introduces new problems. Sonatype concluded that AI gives incorrect upgrade advice for open-source projects in more than a quarter of cases. Veracode found that AI introduced new security vulnerabilities in almost half of the programming tasks examined. This reinforces the view that AI-generated code can be structurally riskier than is often assumed.

Many of these risks remain invisible. Some open-source components enter codebases not via regular package managers, but via copied snippets, supplier integrations, or AI generation. This code does not always appear in manifests and therefore escapes traditional scanning and auditing tools, causing organizations to lose track of what their software actually contains.

Outdated components pose a risk

On top of that, there is a growing maintenance problem. Many commonly used open-source components have not been actively maintained for years. If new vulnerabilities are discovered in them, there is often no maintainer left to fix them. Organizations then have to take action themselves or accept the risk.
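A simple way to surface this maintenance risk is to flag components whose last upstream release is older than a policy threshold. A sketch assuming SBOM-like inventory data is already available (the tuple layout and the two-year threshold are hypothetical choices):

```python
from datetime import date

STALE_AFTER_DAYS = 2 * 365  # policy: flag anything unmaintained for 2+ years

def stale_components(inventory, today: date):
    """Return (name, version) pairs whose last release exceeds the policy.

    Each hit needs a decision: fork and patch in-house, replace the
    component, or explicitly accept the risk.

    `inventory` is an iterable of (name, version, last_release_date).
    """
    return [
        (name, version)
        for name, version, last_release in inventory
        if (today - last_release).days > STALE_AFTER_DAYS
    ]
```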

The overall picture is that the adoption of AI is outpacing the development of governance and control. While virtually all organizations use AI and open source, systematic transparency into what their software contains is often lacking. With increasing regulation around AI and digital product security, this situation is becoming increasingly untenable.

The classic model, in which software is assessed once at release, no longer fits an environment in which code changes constantly and is generated automatically. The report makes clear that organizations can only stay in control by continuing to invest in visibility, governance, and continuous monitoring of their software supply chain. In the AI era, speed is no longer an advantage if control is lacking.