Observability is the weapon against complex hybrid IT chaos

Organizations’ digital dependence is growing by the day. Hundreds of applications and infrastructure components run to support business operations, and they often have to meet high standards: performance and availability are directly linked to customer satisfaction and, therefore, to the success of an organization. That is precisely why observability is rapidly gaining in importance. We discussed the latest developments in this area with Jean-Bastien Kalis, Global Competence Center Lead Observability at Cegeka.

Jean-Bastien makes it clear that Cegeka is fully committed to expanding its observability capabilities. He joined Cegeka in 2023, when the Belgian company acquired his former employer Key-Performance. Since then, investment has continued in people and technology, which Cegeka considers the key components, resulting in a specialized team built around Dynatrace. Cegeka currently uses this platform for everything related to observability: from the data center to the end user. “Observability is not a buzzword, but a necessary step in how organizations maintain control over increasingly complex IT environments,” says Jean-Bastien.

No modern architecture without insight

To understand why observability is taking off, it is important to see how it differs from traditional monitoring. Whereas traditional monitoring has long been limited to statistics on servers, networks, and memory, observability goes a step further. Monitoring mainly records what is happening, while observability shows why it is happening. It establishes connections between systems, shows how components interact with each other, and provides insight into the impact on the end user.

According to Jean-Bastien, that is where the core value lies. “It’s not just about numbers, but about context and causes. Observability reveals how processes behave and where the bottlenecks are.” That insight is crucial now that IT environments are increasingly being set up as hybrid or multicloud. Organizations no longer have to deal with just one data center or application, but with dynamic ecosystems that are constantly changing.

Initially, Cegeka mainly used Dynatrace as a component within its services. The technology quickly added value to the hosting services it has been offering for years. It is now being rolled out to customers more widely and in a more comprehensive form. The Dynatrace platform can easily and automatically recognize different systems and applications, making it possible to map the entire chain, from infrastructure to applications. Agents are placed on servers or virtual machines and continuously collect data.
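
As a small illustration of how such automatically discovered topology can be consumed, the sketch below lists monitored hosts through Dynatrace’s public Environment API v2. The tenant URL, token, and printed fields are placeholders, and the article does not describe Cegeka’s actual configuration; treat this as a minimal sketch rather than a reference implementation.

```python
# Minimal sketch: listing automatically discovered entities (here: hosts)
# via the Dynatrace Environment API v2. Tenant URL and token are placeholders;
# the token would need the entities.read scope.
import requests

TENANT = "https://YOUR_TENANT.live.dynatrace.com"  # placeholder tenant URL
TOKEN = "dt0c01.EXAMPLE"                           # placeholder API token

def list_entities(entity_selector: str) -> list[dict]:
    """Return entities matching the selector, e.g. all monitored hosts."""
    resp = requests.get(
        f"{TENANT}/api/v2/entities",
        headers={"Authorization": f"Api-Token {TOKEN}"},
        params={"entitySelector": entity_selector, "pageSize": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])

if __name__ == "__main__":
    for host in list_entities('type("HOST")'):
        print(host.get("displayName"), host.get("entityId"))
```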

“All services and requests between applications are mapped, so you can immediately see how the landscape is structured,” explains Jean-Bastien. At the same time, customization remains indispensable. The agent recognizes an application, but does not always understand which server names or processes are important for the business. This is exactly where Cegeka adds value by configuring the context and developing extensions for specific technologies, such as Teams calls or data from PaaS environments.
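
One way to get such business-specific context into the platform is to feed it additional data points. The hedged sketch below pushes a custom metric through Dynatrace’s metric ingestion API; the metric key, dimension, and value are invented for illustration and do not reflect how Cegeka’s extensions for Teams calls or PaaS data are actually built.

```python
# Minimal sketch: sending a custom data point to Dynatrace's metric ingestion
# endpoint in its plain-text line protocol ("metric.key,dim=value 0.92").
# The metric name and dimension below are hypothetical examples.
import requests

TENANT = "https://YOUR_TENANT.live.dynatrace.com"  # placeholder tenant URL
TOKEN = "dt0c01.EXAMPLE"                           # token needs metrics.ingest scope

def push_metric(key: str, value: float, **dimensions: str) -> None:
    """Send one data point in Dynatrace's line-protocol format."""
    dims = ",".join(f"{k}={v}" for k, v in dimensions.items())
    line = f"{key},{dims} {value}" if dims else f"{key} {value}"
    resp = requests.post(
        f"{TENANT}/api/v2/metrics/ingest",
        headers={
            "Authorization": f"Api-Token {TOKEN}",
            "Content-Type": "text/plain; charset=utf-8",
        },
        data=line,
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Hypothetical example: a call-quality score collected by a custom extension.
    push_metric("custom.collab.call_quality", 0.92, site="brussels")
```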

Moving from reactive to proactive means control over complex IT

The question then arises as to how organizations manage such an observability platform. In theory, customers could take on the functional management themselves, but in practice this often proves difficult. The complexity of modern IT environments requires knowledge and capacity that are not available everywhere. Jean-Bastien outlines the dilemma. “If a customer had to employ someone full-time to manage everything, a capacity and knowledge problem would quickly arise. Many organizations therefore call on us to ensure continuity.” With a team of dozens of engineers, Cegeka can easily scale up, even during peak loads or holidays. In this way, it takes care of the operational side, while customers retain the insight and reporting they need to bring their IT and business together.

In practice, this means that organizations can finally make the transition from reactive to proactive working. In the past, companies often only took action when users complained. With today’s technology, problems can now be proactively detected and nipped in the bud. For example, an organization without modern monitoring remains in the dark until a call center becomes overwhelmed with complaints. With observability, deviations can be identified much earlier, often before the end user is affected. This significantly reduces the turnaround time for problem solving and minimizes the chance of customer disruption. “The difference between reactive and proactive working can be a matter of months. Problems that used to take weeks or months to resolve are now sometimes solved within days,” says Jean-Bastien.
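
At its core, proactive working means comparing live behavior against a learned baseline so that deviations surface before users complain. The sketch below is a deliberately simplified illustration of that idea; it is not how Dynatrace’s anomaly detection works internally, and the window size, threshold, and sample values are arbitrary assumptions.

```python
# Simplified illustration of baseline-based deviation detection: learn the
# recent normal range of a response time and flag samples that fall far
# outside it. Thresholds and sample data are arbitrary.
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent response times (ms)
        self.threshold = threshold           # deviation in standard deviations

    def observe(self, response_ms: float) -> bool:
        """Return True if the new sample deviates from the learned baseline."""
        deviates = False
        if len(self.samples) >= 30:          # wait until a baseline exists
            mu, sigma = mean(self.samples), stdev(self.samples)
            deviates = sigma > 0 and abs(response_ms - mu) > self.threshold * sigma
        self.samples.append(response_ms)
        return deviates

detector = BaselineDetector()
for i, value in enumerate([120, 118, 125, 119, 122] * 10 + [480]):
    if detector.observe(value):
        print(f"sample {i}: {value} ms deviates from the baseline")
```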

Practice proves its usefulness

That this is not just theory is evident from concrete cases shared by Jean-Bastien. At the Belgian branch of Carglass, for example, the internal booking system for windshield replacement was monitored intensively, revealing which call centers were working more efficiently and where delays were occurring. Another striking example comes from the financial sector, where the entire chain of instant payments is monitored for a Dutch bank. Since every delay has a direct impact on the customer experience, this approach enables the bank to solve problems before they become noticeable to users. Both cases show that observability not only increases technical reliability, but also contributes directly to the business and to customer satisfaction.

What is possible and what is not?

It is important to emphasize that Dynatrace brings together different monitoring disciplines. It functions as a single platform that provides end-to-end visibility across the entire landscape, rather than a separate tool for each IT team with fragmented data. This means faster problem solving and better collaboration between departments. “The power lies in the integration,” says Jean-Bastien. “Instead of each department looking at its own silo, teams share the same insights and work from the same dashboard. This speeds up the search for causes and makes collaboration much more effective.” In hybrid and multicloud environments, where dependencies are more complex and less visible, such an integrated perspective is indispensable.

Nevertheless, there are limits to what observability can achieve. Legacy systems, such as monolithic applications in C++ or COBOL on mainframes, are difficult to instrument with modern agents. This poses a challenge in some sectors, particularly for banks that still rely heavily on older core systems. However, according to Jean-Bastien, this is a temporary problem. “The trend is moving towards more modern architectures that are suitable. Companies are increasingly migrating to the cloud or to microservices, which increases the possibilities for observability.” Even with older applications, valuable information can still be obtained by monitoring the front end and the end-user experience. This means control remains possible, even when access to the back end is limited.
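
Monitoring the front end when the back end cannot be instrumented can be as simple as probing the public entry point and recording availability and response time, which is roughly what synthetic checks do. The sketch below illustrates that idea in plain Python; the URL, interval, and threshold are placeholder assumptions rather than anything specific to the environments discussed here.

```python
# Minimal sketch of a synthetic front-end probe: request the public entry
# point periodically and record availability and response time.
# URL, threshold, and interval are illustrative placeholders.
import time
import requests

FRONTEND_URL = "https://example.com/login"  # placeholder entry point
SLOW_THRESHOLD_S = 2.0                      # illustrative response-time budget

def probe() -> None:
    start = time.monotonic()
    try:
        resp = requests.get(FRONTEND_URL, timeout=10)
        elapsed = time.monotonic() - start
        status = "SLOW" if elapsed > SLOW_THRESHOLD_S else "OK"
        print(f"{status} {resp.status_code} in {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"DOWN: {exc}")

if __name__ == "__main__":
    while True:
        probe()
        time.sleep(60)  # one check per minute
```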

Data enables decisive action

As far as Jean-Bastien is concerned, the added value really becomes apparent when observability is used for optimization. By comparing performance before and after migrations, organizations can objectively determine whether new environments actually perform better. A Belgian insurer systematically applies this when transitioning from on-premises to the cloud. “We can demonstrate that performance is improving, thereby strengthening the business case for migration,” says Jean-Bastien. This data-driven approach makes it possible to make well-informed investment decisions and helps companies gain concrete insight into the value of digitization.
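
Such a before/after comparison boils down to putting the same percentile figures side by side for both environments. The sketch below shows the idea with made-up response-time samples; the numbers and the choice of P50/P95 are illustrative assumptions, not data from the insurer mentioned above.

```python
# Minimal sketch: compare response-time percentiles before and after a
# migration to judge objectively whether the new environment performs better.
# All sample values are made up for illustration.
from statistics import quantiles

def percentile_report(label: str, samples_ms: list[float]) -> dict[str, float]:
    q = quantiles(samples_ms, n=100)        # q[49] is P50, q[94] is P95
    report = {"p50": q[49], "p95": q[94], "max": max(samples_ms)}
    print(label, {k: round(v, 1) for k, v in report.items()})
    return report

before = [240, 260, 250, 245, 255, 248, 252, 258, 243, 420] * 20  # on-premises
after  = [160, 175, 168, 162, 172, 165, 170, 178, 163, 300] * 20  # cloud

b = percentile_report("before", before)
a = percentile_report("after", after)
print(f"P95 improvement: {(1 - a['p95'] / b['p95']) * 100:.0f}%")
```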

In addition, the data forms a basis for tracking incidents from the data center to the end user. Incidents are automatically detected and visualized, including their impact on users. This allows developers to quickly identify which service is responsible for a malfunction, enabling them to find the cause much faster. Furthermore, user sessions can be reviewed for 30 days, allowing you to determine exactly what happened during an incident. For organizations that depend on digital services, this means that searching for a needle in a haystack becomes a targeted and efficient investigation.
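
Conceptually, that kind of triage comes down to narrowing the retained session data to the incident window and seeing where the errors concentrate. The sketch below uses an invented record structure to illustrate this; real session data would of course come out of the observability platform rather than a hard-coded list.

```python
# Simplified illustration of incident triage on retained session data:
# filter sessions to the incident window and count errors per service to
# see which service is most likely responsible. The records are invented.
from collections import Counter
from datetime import datetime

sessions = [
    {"start": "2025-03-01T10:02:00", "service": "payments-api", "errors": 3},
    {"start": "2025-03-01T10:04:00", "service": "payments-api", "errors": 5},
    {"start": "2025-03-01T10:05:00", "service": "catalog-api", "errors": 1},
    {"start": "2025-03-01T09:30:00", "service": "payments-api", "errors": 0},
]

incident_start = datetime(2025, 3, 1, 10, 0)
incident_end = datetime(2025, 3, 1, 10, 15)

errors_per_service = Counter()
for s in sessions:
    ts = datetime.fromisoformat(s["start"])
    if incident_start <= ts <= incident_end and s["errors"] > 0:
        errors_per_service[s["service"]] += s["errors"]

for service, count in errors_per_service.most_common():
    print(service, count)  # payments-api first, with the most errors
```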

Observability as a strategic investment

Finally, let’s return to the business advantage we mentioned earlier. The platform can show which customer segments are experiencing problems, whether they are bronze, gold, or platinum customers. An organization can then set priorities and take targeted action where the impact is greatest. “Observability connects technology with business. It’s not just about CPU and memory, but about customer satisfaction and revenue. That makes it a strategic investment,” concludes Jean-Bastien.
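
Prioritizing by customer segment can be thought of as weighting the number of affected users by the value of their segment, so that action goes where the business impact is greatest. The sketch below illustrates that idea; the segment names come from the article, but the weights and user counts are invented.

```python
# Illustrative prioritization: weight affected users per customer segment
# so that a small number of high-value customers can still rank highest.
# Weights and counts are invented for this sketch.
SEGMENT_WEIGHT = {"platinum": 10, "gold": 4, "bronze": 1}

affected_users = {"bronze": 150, "gold": 45, "platinum": 20}  # hypothetical counts

impact = {seg: n * SEGMENT_WEIGHT[seg] for seg, n in affected_users.items()}
for segment, score in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{segment}: impact score {score}")
```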

With this vision, Cegeka is working on its ambition to serve and ultimately lead the observability market in Europe. The team of specialists, available 24/7, is growing steadily. In addition, the combination of technology and experience should help customers maintain control over their increasingly complex IT landscapes. “Our ambition is clear: we want to be the partner that helps organizations get a grip on hybrid and multicloud environments. Observability is not a luxury in this regard, but a prerequisite for continuing to operate successfully,” says Jean-Bastien.
