
Dynatrace is a unified observability and security platform company with AI-powered capabilities for data analytics and automation. The company says it has now pledged to match the cadence of modern business information channels. As a branded product and service, Dynatrace Data Observability is designed to provide IT teams with business events data that serves as ‘fuel’ for the company’s own Davis AI engine, helping to deliver business analytics alongside reliable ‘automations’ designed to shoulder previously human-centric workflow tasks.

When we build the virtualized, componentized, composable and sometimes commoditised network structures that typify a cloud computing deployment, we are rewarded with a highly functional and performant instance of IT, but we also end up with a computing system that exists in a box (okay, it’s a server cluster in a datacentre somewhere), meaning some or all of its processes, values and metrics are not always as visible to IT management as we might like.

The need to be able to ‘look inside the cloud’ has given us observability, now a formalized genre of IT in its own right, with a toolset that includes Application Performance Management (APM), application telemetry and a plethora of traceability, diagnostics, root cause analysis and issue resolution technologies, all of which are brought together via specialized reporting tools and dashboards.

But this is not news; this stuff has been happening for more than a quarter of a century, throughout the contemporary era of cloud networks. What observability does next is become more holistically unified and orchestrated so that every single part of the network is understood… and this is a process that relies upon automation accelerators provided by Artificial Intelligence (AI).

Matching the new cadence

As we have already intimated with our inevitably incomplete list of the functions that now fall into the observability genre (diagnostics, telemetry etc.), Dynatrace Data Observability is a technology that has been engineered to serve a range of professional IT roles.

It provides data for business analytics teams, for data science engineers and for combined DevOps teams made up of developers plus operations staff. It also serves Site Reliability Engineering (SRE) teams and security management, both of which in fairness mostly fall into the operations function, although Dynatrace clearly likes to call them out for some additional recognition and love.

Data cleansing, now wash your hands

Of course, there’s not a huge amount of point in observing data unless we are also able to do something with it. In this regard, the new functions detailed here are said to complement the Dynatrace platform’s existing data cleansing capabilities (deduplication, eliminating false positives, detecting malicious sources etc.) as well as the data enrichment capabilities (combining internal data sources with third-party data sources or analytics etc.) provided by Dynatrace OneAgent.
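To make the deduplication idea concrete, here is a minimal Python sketch that drops repeated telemetry events by hashing their content. It is a generic illustration only, not Dynatrace’s actual cleansing logic, and the event fields shown are hypothetical.

```python
import hashlib
import json

def deduplicate_events(events):
    """Drop telemetry events whose content has already been seen.

    Each event is a plain dict; we hash a canonical JSON form of its
    fields so repeated submissions of the same event collapse to one.
    """
    seen = set()
    unique = []
    for event in events:
        fingerprint = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(event)
    return unique

# Hypothetical business events: the duplicate "order-1001" entry is removed.
events = [
    {"order_id": "order-1001", "status": "paid", "value": 42.50},
    {"order_id": "order-1001", "status": "paid", "value": 42.50},
    {"order_id": "order-1002", "status": "paid", "value": 19.99},
]
print(deduplicate_events(events))
```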

Speaking of third-party data, there are also functions here to help ensure firms use only high-quality data collected via external sources, including open source standards (such as OpenTelemetry, an observability framework and toolkit designed to create and manage telemetry data such as traces, metrics and logs) and other custom instrumentation, alongside Dynatrace’s own Application Programming Interfaces (APIs).
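For readers unfamiliar with what custom instrumentation via an open standard looks like, below is a minimal sketch using the OpenTelemetry Python SDK to emit a single trace span. The service name, span name and attribute are invented for illustration, and the console exporter would in practice be swapped for an exporter that ships data to whichever observability backend is in use.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints spans to the console; a real deployment
# would register an exporter pointed at its chosen backend instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

# Record one unit of work as a span, with a business attribute attached.
with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.value", 42.50)
```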

The company says that it enables teams to track the freshness, volume, distribution, schema, lineage and availability of this externally sourced data to reduce (and hopefully eliminate) the requirement for additional data cleansing tools.

Fueling analytics & automation

“Data quality and reliability are vital for organizations to perform, innovate and comply with industry regulations,” said Bernd Greifeneder, CTO at Dynatrace. “A valuable analytics solution must detect issues in the data that fuels analytics and automation as early as possible. Dynatrace OneAgent has always helped ensure that the data it collects is of the highest quality. By adding data observability capabilities to our unified and open platform, we’re enabling our customers to harness the power of data from more sources for more analytics and automation possibilities while maintaining the health of their data, without any extra tools.”

As Greifeneder suggests, high-quality data is obviously critical for organizations in the digital age, especially when they rely on it to inform, direct and control business and product strategies or optimize and automate processes. 

However, as Greifeneder and team note (and as we proposed at the very start of this story), the scale and complexity of data from modern cloud ecosystems, combined with the increased use of open source solutions, open APIs and other customized instrumentation, make it hard to achieve this goal. Why? Because cloud essentially means an enterprise implementing a distributed data architecture, so there is a wider plain of information coverage to traverse. He proposes that by adopting data observability techniques, organizations can improve data availability, reliability and quality throughout the data lifecycle – a living existence that sees data move from the point of ingestion through to analytics and automation.

Dynatrace Data Observability works with other core Dynatrace platform technologies, including Davis hypermodal AI (i.e. more than one type of AI, combining predictive, causal and generative AI capabilities) for business use.

A 7-point observability checklist

As we look to the future use of cloud insight technologies of this kind, C-suite management in need of an observability gauge might do well to consider a seven-point checklist that includes:

  • data freshness
  • volume
  • distribution
  • schema
  • lineage
  • availability and…
  • lifecycle

Data freshness helps ensure data relates to up-to-date inventory, stock, current services and employee capabilities. Understanding data volume helps predict and plan for customer IT service use cases that might otherwise cause anomalies or failures if undetected. Knowing about data distribution enables firms to monitor for patterns, deviations or outliers from the expected way data values are spread in a dataset. Schema tells us what our data structure is and helps map information relationships. Lineage delivers precise root-cause detail on the origins of data and the services it will impact downstream. Availability helps us alert on abnormalities such as downtime and latency. Lifecycle, finally, covers the whole journey data takes from the point of ingestion through to analytics and automation.
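As a rough illustration of what automated checks against the first few of these attributes might look like, here is a minimal Python sketch covering freshness, volume and schema. The thresholds, column names and timestamp handling are all hypothetical assumptions for the example; this is a generic pattern, not how Dynatrace implements its checks.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical expectations for an incoming dataset.
EXPECTED_COLUMNS = {"order_id", "status", "value", "updated_at"}
MAX_AGE = timedelta(hours=1)          # freshness: newest record must be recent
MIN_ROWS, MAX_ROWS = 100, 1_000_000   # volume: row count within expected bounds

def check_dataset(rows):
    """Return pass/fail results for freshness, volume and schema checks.

    Each row is a dict; 'updated_at' values are assumed to be
    timezone-aware datetimes.
    """
    results = {}

    # Volume: is the row count inside the expected range?
    results["volume"] = MIN_ROWS <= len(rows) <= MAX_ROWS

    # Schema: does every row carry the expected columns?
    results["schema"] = all(EXPECTED_COLUMNS <= set(row) for row in rows)

    # Freshness: is the newest timestamp within the allowed age?
    timestamps = [row["updated_at"] for row in rows if "updated_at" in row]
    newest = max(timestamps) if timestamps else None
    results["freshness"] = (
        newest is not None and datetime.now(timezone.utc) - newest <= MAX_AGE
    )
    return results
```

Distribution, lineage and availability checks would follow the same pattern: comparing observed values, upstream sources and response behaviour against declared expectations.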

Spoiler alert: Dynatrace calls out those exact seven attributes as part of its platform’s infrastructure observability capabilities, all of which may now form part of an enterprise’s approach to the data lifecycle.

Cloud will always be cloudy to some degree, just because it is virtualised and abstracted at its core, but observability is there to help us look inside if we want to adopt the tools on offer.