
Switching to the (public) cloud is no longer a goal in and of itself. Usually, a switch is prompted by the desire to modernise the application landscape further. This has made the overall IT landscape and the monitoring of its environments more complex. While it used to be sufficient to know whether an application was functioning correctly or not, it has become vital to map out the entire IT chain. There are highly innovative monitoring tools available on the market, but a tool alone is not enough. Organisations are often in the dark about what to do with the large amounts of data and (unnecessary) alerts that come with monitoring tools. As a result, they are vulnerable to data fatigue.

The application landscape of organisations is becoming increasingly crowded. At the same time, applications are hosted by more and more parties. A single business process can be facilitated by multiple, varying applications, each hosted by a different company. As a result, a single overview of how applications work no longer exists. The application landscape has to be viewed from multiple angles, regardless of whether it consists of services, clouds or data centres.

Holistic monitoring

In short: the technical IT landscape is changing. The prerequisites for a positive user experience — how a user experiences the functioning of a (business) application, online workplace or website — change with it. Therefore, a more holistic approach to monitoring has become indispensable in the modern application landscape. This is where next-gen monitoring, also known as digital experience monitoring, makes its appearance.

The goal of digital experience monitoring is to collect data on how the IT landscape behaves in all its diversity. Different types of monitoring are used, including real user monitoring, end user monitoring and synthetic monitoring. By correctly correlating the collected data, organisations can more effectively pinpoint where users are experiencing problems and where in the landscape these problems occur. So, where do you start?
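To make the notion of synthetic monitoring concrete, below is a minimal sketch in Python: a probe that requests an application from the outside, the way a user's browser would, and records status and latency. The endpoint is hypothetical, and a real digital experience monitoring tool does far more (scripted user journeys, probes from multiple locations), but the principle is the same.

```python
import time
import requests

# Hypothetical endpoint to probe; replace with a real application URL.
TARGET_URL = "https://app.example.com/health"

def synthetic_check(url: str, timeout: float = 5.0) -> dict:
    """Run one synthetic probe: request the URL and time the response."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=timeout)
        elapsed = time.monotonic() - start
        return {"url": url, "status": response.status_code,
                "latency_s": round(elapsed, 3), "ok": response.ok}
    except requests.RequestException as exc:
        # Failures are data too: a timeout here is a user staring at a spinner.
        return {"url": url, "status": None, "latency_s": None,
                "ok": False, "error": str(exc)}

if __name__ == "__main__":
    print(synthetic_check(TARGET_URL))
```

Run on a schedule from several locations, such probes provide a baseline of the user experience even when no real users are active.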

Value of data

To get real value from data, organisations must first have a good understanding of how to view that data. Having an accurate picture of the application landscape is essential. If it is not clear what certain values or triggers represent within the ecosystem of an application, it is very difficult to attach meaning to them, let alone follow up with effective actions and predictions.

Start by mapping the IT landscape: visualise what the organisation has in place. Look beyond the internal IT department and consider, for example, the Internet connection in people’s homes or the mobile devices used for work. To do this, the IT department will have to walk through the landscape to form an understanding of what is happening and what is relevant.
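As a minimal illustration, such a map can start as nothing more than a structured inventory. The Python sketch below uses entirely hypothetical names to capture one business process, the applications behind it, who hosts each one, and the client-side factors that also shape the experience.

```python
# A toy inventory of one business process and what it depends on.
# All names are hypothetical; in practice this would come from a CMDB
# or an automated discovery tool.
landscape = {
    "order-processing": {
        "applications": ["webshop-frontend", "order-api", "payment-gateway"],
        "hosting": {
            "webshop-frontend": "public-cloud-a",
            "order-api": "private-datacentre",
            "payment-gateway": "saas-vendor-x",
        },
        # Factors outside the internal IT department that still affect users.
        "client_side": ["home-broadband", "mobile-devices"],
    }
}

def hosting_parties(process: str) -> set:
    """Every internal or external party hosting part of a business process."""
    return set(landscape[process]["hosting"].values())

print(hosting_parties("order-processing"))
# e.g. {'public-cloud-a', 'private-datacentre', 'saas-vendor-x'}
```

Even this toy version makes the earlier point visible: one business process, three applications, three different hosting parties.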

Next, the collected data must be correlated based on triggers known to be important for application performance and user experience. This analysis must be based on an organisation’s knowledge of the application, the underlying infrastructure, the ecosystem, and so forth. Default monitoring tools do not take these facets into account. Determining which data should be correlated, and how, remains human work.
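As a sketch of what that correlation step can look like, the snippet below takes invented per-minute measurements from two monitoring sources and computes a simple correlation matrix. The matrix hints at which back-end signals move together with the user-facing ones; judging which of those relationships actually matter is the human work referred to above.

```python
import pandas as pd

# Hypothetical per-minute metrics from two monitoring sources.
metrics = pd.DataFrame({
    "page_load_s":   [1.2, 1.3, 2.8, 3.1, 1.4, 1.2],    # real user monitoring
    "api_latency_s": [0.2, 0.2, 1.1, 1.3, 0.3, 0.2],    # infrastructure metrics
    "error_rate":    [0.00, 0.01, 0.09, 0.12, 0.01, 0.00],
})

# Pearson correlation: values near 1 suggest the signals rise and fall together.
print(metrics.corr().round(2))
```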

Data fatigue

The functional implementation of a tool for digital experience monitoring is crucial. Large amounts of data must be presented in a clear dashboard in order to be analysed correctly. With the help of technologies such as machine learning, organisations can become increasingly efficient at repeatedly analysing large amounts of (monitored) data in a structured way.
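A full machine learning pipeline is beyond a short example, but the underlying idea of structured, repeatable analysis can be shown with a simple statistical stand-in: automatically flagging values that deviate sharply from the norm, so that humans only look at the exceptions. The alert counts below are invented.

```python
import statistics

# Hypothetical daily alert counts from a monitoring tool.
alert_counts = [42, 38, 45, 40, 41, 39, 120, 43]

def flag_anomalies(values, z_threshold=2.0):
    """Return (index, value) pairs that deviate strongly from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]

print(flag_anomalies(alert_counts))  # [(6, 120)]: the spike on day 6 stands out
```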

The landscape, users’ wishes, and the ways applications are used are subject to continuous change. Therefore, a continuous improvement cycle has to be in place, ensuring that the right triggers keep being activated over time.

Simply turning on a monitoring tool burdens organisations with large amounts of data, triggers and alerts. Collecting all of this data indiscriminately introduces the risk of data fatigue. Sorting data is a continuous process, and it is difficult to predict how large a data set will become, let alone how often it will have to be consulted. A scalable cloud solution is necessary for success. By subsequently focusing on the metrics that determine success, the amount of data is effectively reduced – without leading to data fatigue.
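As a final sketch, focusing on the metrics that determine success can be as blunt as filtering the raw alert stream down to the metrics the organisation has explicitly tied to user experience. All metric names below are hypothetical.

```python
# Hypothetical stream of raw alerts straight from a monitoring tool.
raw_alerts = [
    {"metric": "cpu_idle_pct", "value": 12},
    {"metric": "page_load_s", "value": 4.2},
    {"metric": "disk_queue_len", "value": 3},
    {"metric": "checkout_error_rate", "value": 0.08},
]

# Metrics the organisation has decided actually define user experience.
SUCCESS_METRICS = {"page_load_s", "checkout_error_rate"}

actionable = [a for a in raw_alerts if a["metric"] in SUCCESS_METRICS]
print(actionable)  # everything else is noise that feeds data fatigue
```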

This article was contributed by Ruben van der Zwan, CTO of Sentia Group. Follow this link for more information about the possibilities that Sentia Group offers.