
“Blind AI deployment leads to knowledge loss and software failures”


Artificial intelligence is rapidly being integrated into business processes, driven by promises of efficiency and cost savings. However, there is a serious warning from the industry. CEO Kurt Jonckheer, CTO Bruno De Bus, and CPO Dirk Van de Poel of platform engineering company Klarrio argue that the current uncontrolled adoption of AI has a dangerous downside. Companies risk losing their human expertise, creating unmanageable security risks, and becoming dependent on a handful of tech giants. “Every technology has positive aspects, but it’s about how you use that technology.”

To understand the current AI hype, Jonckheer begins by drawing a direct parallel with the massive cloud adoption of the past decade. Ten years ago, organizations blindly migrated to the cloud, attracted by the promise of flexibility and lower costs. However, this often happened without a solid exit strategy or a fundamental understanding of the impact on their own architecture.

“What we hear today from many large cloud users is that the consumption model simply no longer works for them,” Jonckheer notes. “It’s all becoming far too expensive. After years of blindly following the trend, they are realizing that they can no longer afford it.”

The mistake is now threatening to repeat itself with AI, but with potentially greater and irreversible consequences. Whereas companies were still able to partially return to on-premises solutions or implement cost optimizations with cloud services, the impact of AI on the human knowledge base is much harder to reverse.

Expertise evaporates through blind use

A major and insidious danger of AI adoption that Klarrio sees lies in the loss of human capital. When employees structurally rely on AI to generate code, write documents, or analyze data, the development of their own critical skills stops.

Marketing promises suggest that one employee with AI can do the work of six juniors. But if those six junior positions are never filled again, how will an organization train the senior experts of the future? “Over time, you simply lose your expertise,” warns Jonckheer. “If no one has the knowledge anymore to objectively judge what the AI generates, and only a handful of highly talented staff can still interpret it, you won’t get very far. You need knowledge to be able to verify the output.”

This extends across entire professional communities. The rise of AI tools trained on Stack Overflow content plays an important role here. Stack Overflow was once a lively place where developers exchanged knowledge and nuance, but interaction there is declining now that many developers get quick, immediate answers from AI assistants instead. Context and nuance risk being lost, and, more crucially, hardly any new knowledge and content are being built up anymore.

The crisis is also being felt in education, where teachers face an impossible task. A lecturer in medicine illustrated the problem to Jonckheer: more than half of the assignments submitted today are generated with AI. In detailed, scientific work of this kind, hallucination is disastrous, and it is precisely what the generative AI being used often produces. Teachers are therefore left correcting AI-generated hallucinations one by one, which creates more work rather than less.

The ‘black box’ and inevitable software failures

AI systems excel at generating content that sounds plausible, but what they generate is not always safe and correct. Human verification, therefore, remains necessary. This leads to a verification crisis. In the pre-AI development era, a team of developers wrote 10,000 lines of code manually. Now, AI can produce 100,000 lines of code in a short period of time. It is impossible to check such enormous volumes of code in a reasonable amount of time. “Software is becoming a black box,” Jonckheer predicts. “Something has been generated. No one understands exactly what it is, what it does, or how it works. It is then quickly rolled out, without really knowing what it does.” He therefore predicts massive “software failures” within the next five years, especially when automated code goes into production unchecked.

According to Jonckheer, De Bus, and Van de Poel, the security implications are quite alarming. Malicious actors are already actively exploiting the blind spots in automated processes. AI bots systematically manipulate README files on platforms such as GitHub to artificially increase the popularity of malicious open-source frameworks. Last year, Klarrio identified 2,400 GitHub repositories containing ransomware in a short study. Inattentive developers who blindly trust AI suggestions are pushing these vulnerabilities directly into production software.

[Image: a hooded person typing code on a laptop against a dark background, with lines of colorful programming code on the screen.]

European sovereignty under pressure

For Europe, this loss of knowledge also entails major geopolitical risks. The continent is structurally lagging behind America and China in the AI race and has been struggling with an overloaded data center market for years. The expected demand for AI will require tens of gigawatts of additional capacity by 2030.

Dependence on American and Chinese tech giants is growing exponentially. Jonckheer cites a recent study showing that the three major players, Amazon, Google, and Microsoft, earn €260 billion annually in Europe, with virtually no tax payments. “On the one hand, they are driving everyone crazy, causing jobs to disappear, and on the other hand, they are raking in huge profits and contributing nothing to society.”

Jonckheer paints a bleak picture for European independence if this trend continues: “Less control over data, less control over infrastructure, less control over the cloud. If, on top of that, you also lose control over your own knowledge base, you won’t need to look for a plan B within the European Union within ten years. Because who would you build that plan with?”

Forgotten ecological and social toll

The CEO, CTO, and CPO of Klarrio point out that the environmental cost of AI, despite the technology’s enormous energy consumption, is not sufficiently taken into account in the discussion. While organizations have been working on sustainability for years, AI threatens to undo those gains.

The figures are sobering. Earlier statistics suggest that a standard Google search consumes as much energy as leaving a 60-watt light bulb on for 13 seconds. The Klarrio executives indicate that an AI query (such as via ChatGPT) consumes at least ten times as much energy; the actual consumption ultimately depends on the model used and the query’s complexity. Either way, it adds up to considerable extra CO2 emissions every year. “Multiply that by millions of mindless searches by 12-year-olds on the tram,” Jonckheer points out sharply. “Then you’ve already negated the output of an entire wind farm. We are squandering our primary needs through practices like this.”
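The comparison above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is purely illustrative: it takes the article’s 60-watt/13-second baseline and the “at least ten times as much” multiplier as given, and the query volume is a hypothetical assumption, not a figure from Klarrio.

```python
# Back-of-the-envelope energy estimate, using the figures quoted in the
# article as assumptions: a standard search uses as much energy as a
# 60 W bulb left on for 13 seconds, and an AI query uses ~10x that.

BULB_WATTS = 60          # W
SEARCH_SECONDS = 13      # s
AI_MULTIPLIER = 10       # "at least ten times as much" (lower bound)

search_joules = BULB_WATTS * SEARCH_SECONDS   # 780 J per search
search_wh = search_joules / 3600              # ~0.217 Wh per search
ai_query_wh = search_wh * AI_MULTIPLIER       # ~2.17 Wh per AI query (lower bound)

# Scale to a hypothetical volume of one million AI queries per day, for a year:
queries_per_day = 1_000_000
annual_mwh = ai_query_wh * queries_per_day * 365 / 1_000_000  # Wh -> MWh

print(f"one search  : {search_wh:.3f} Wh")
print(f"one AI query: {ai_query_wh:.2f} Wh (lower bound)")
print(f"1M AI queries/day for a year: ~{annual_mwh:.0f} MWh")
```

Even at this conservative lower bound, a single million-query-per-day workload lands in the hundreds of megawatt-hours per year, which is the scale the “output of an entire wind farm” remark is gesturing at.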

In addition, the social consequences of automation are far from clear. Although many employees are concerned about job losses, proponents often claim that AI will create new jobs. Jonckheer is skeptical about this: “The idea that every job that disappears will be replaced by a new one is nonsense. We are heading for an increase in the non-working population with an already rapidly aging population.” According to the CEO of Klarrio, this raises difficult questions about the financing of the welfare state, healthcare, and pensions, especially when profits flow to parties that pay little or no local taxes.

The way forward: Controlled use

Despite this strong criticism, Klarrio is not advocating a ban on AI. The technology offers fundamental advantages when used correctly. The key to successful AI use, however, lies in the word ‘control’.

“There is no question that AI offers advantages,” Jonckheer acknowledges. “But it must be used as a tool within a controlled process, not as a substitute for humans.” This means that organizations must embrace strict principles. One way is to build guardrails into development processes, so that the generated output can always be objectively assessed for correctness and safety. Protecting one’s own expertise is just as important. Companies must safeguard their human verification capacity and prevent junior positions from simply being cut. After all, these young talents form the indispensable basis for tomorrow’s senior experts.
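What such a guardrail in a development process might look like can be sketched in a few lines. The checks and the threshold below are hypothetical illustrations of the principle the article describes (objective verification plus a human in the loop), not Klarrio’s actual process.

```python
# Minimal sketch of a merge "guardrail": code (AI-generated or not) is only
# allowed through when every objective check holds, instead of being trusted
# blindly. The threshold value is a hypothetical example.

MAX_NEW_LINES = 400  # cap how much new code one reviewer must realistically absorb

def merge_allowed(added_lines: int, tests_pass: bool, human_reviewed: bool) -> bool:
    """Allow a merge only when the test suite passes, a human has
    verified the change, and the diff is small enough to actually review."""
    return tests_pass and human_reviewed and added_lines <= MAX_NEW_LINES

# A 100,000-line AI-generated change fails the gate even with passing tests,
# because no one can meaningfully verify a diff of that size:
print(merge_allowed(100_000, tests_pass=True, human_reviewed=True))  # False
print(merge_allowed(250, tests_pass=True, human_reviewed=True))      # True
```

The design point is that the size cap directly encodes the article’s “verification crisis”: AI can generate far more code than humans can check, so the process, not the generator, has to set the pace.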

Klarrio also emphasizes the importance of quality over pure quantity. Although AI can generate a standard website or code block at lightning speed, genuine innovation and creativity still require human insight. The drive for maximum efficiency should not lead to digital uniformity. Finally, Klarrio believes that there needs to be much broader awareness of the immense ecological and structural impact of blind AI adoption. And not only in the workplace, but also in education and society.

The European AI Act, most of which will apply from August 2026, can partly guide this responsible adoption. However, according to the three executives, regulation alone is not enough to prevent the erosion of knowledge. This requires a conscious, strategic choice by companies themselves.

According to Klarrio’s reasoning, organizations are at a crossroads. Those who now use AI purely to reduce costs and replace people in the short term will pay the price later through security incidents, lost knowledge, and impossible exit strategies. Klarrio’s warning is therefore a call for common sense: use technology to empower people, not to relinquish control. Or, as Jonckheer sums it up: “A fool with a tool is still a fool.”
