
Dynatrace Perform live report: Re-mapping software intelligence

Software intelligence company Dynatrace used its annual ‘Perform’ conference in February 2023 to clarify and explain its company vision and define the direction of its automated observability roadmap. The overall message from the company is one of intelligent, automated observability across the vast topographies of the modern cloud landscape. Combined with the ability to channel that intelligence via visualisations and enriched analytics functions – all with a dose of low-code for the BizDevSecOps cognoscenti – Dynatrace is on a mission to re-map the way we look at data in complex contemporary deployment scenarios.

Out to sea

But before we get to the products and services, let’s paint an illustrative picture. The suggestion here is that we’re all at sea. As the digital drive fuelled by the birth of cloud computing, the rise of mobile ubiquity, the AI renaissance and the era of edge and smart machines all now come together, some organisations are being left adrift.

Even the most forward-thinking businesses are generating more data than they can handle. It’s pouring out of their cloud estates in biblical proportions, often just spewing into unstructured data lake repositories. With apologies for swelling the seafaring analogies further, the data and information flood is only set to worsen as our continued reliance on the devices that keep us connected pumps even more data into the digital seaways we seek to traverse.

Dynatrace is no stranger to H2O-related analogies: the company designed and built its Grail data lakehouse technology to be a solid-hulled vessel capable of taming the data explosion currently causing a maelstrom on the high seas of business.

Back to land

Back on dry land, Dynatrace has engineered its technology proposition to work as a platform capable of capturing logs, security data and business ‘events’ (in the digitally encoded sense of the word, not as in annual conferences or Christmas parties) – events such as payment failures or sudden drop-offs in conversion rates. The Dynatrace platform can then process those events for insights without being drowned (okay, sorry, one more, that’s the last one) in the deluge.
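As a purely illustrative sketch – not Dynatrace’s actual API – the core idea of processing a stream of business events for insight can be expressed in a few lines of Python. The event shape and the alert threshold here are assumptions made for the example:

```python
from collections import Counter

def summarise_events(events, alert_threshold=3):
    """Count business events by type and flag any type that breaches
    a simple alert threshold (e.g. repeated payment failures)."""
    counts = Counter(e["type"] for e in events)
    alerts = [t for t, n in counts.items() if n >= alert_threshold]
    return counts, alerts

# Example stream: a burst of payment failures among normal checkouts.
events = [
    {"type": "payment_failure"}, {"type": "checkout_complete"},
    {"type": "payment_failure"}, {"type": "payment_failure"},
]
counts, alerts = summarise_events(events)
# Three payment failures meet the threshold, so that event type is flagged.
```

A real platform would of course do this continuously and at petabyte scale; the sketch only shows the shape of the idea – aggregate events, then surface the ones that matter.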

Dynatrace is now extending Grail to support metrics, distributed traces (from PurePath) and multi-cloud topology and dependencies (from Smartscape) and adding new capabilities to allow customers to raise custom queries using AI-powered graph analytics.

More answers than questions

Now, organisations can ask almost any question about their business and get answers based on reality, not best guesses or good old-fashioned hope. But what kinds of questions are we actually talking about here?

Why did that customer get all the way to checkout and then clear off before completing their purchase? Do I really need all those Kubernetes clusters to be up at 4am? Am I going to need a bigger cloud for all these incoming data streams? Those are the kinds of questions now being asked by systems engineering professionals.

Having all that data and all those answers is a good place to start, but Dynatrace says it is also introducing two new technologies to its platform to enable organisations to put them to work. The Dynatrace AutomationEngine uses a no-code and low-code (NC/LC) toolset to allow teams (both IT and business) to use Dynatrace to automate new workflows across the full spectrum of BizDevSecOps use cases.
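The AutomationEngine itself is configured through Dynatrace’s own low-code tooling, but the underlying pattern – wiring platform events to automated workflows – can be sketched generically. Every name in this snippet is a hypothetical stand-in, not a Dynatrace identifier:

```python
# Hypothetical sketch: route incoming platform events to registered workflows.
workflows = {}

def workflow(event_type):
    """Decorator that registers a handler function for a given event type."""
    def register(fn):
        workflows[event_type] = fn
        return fn
    return register

@workflow("payment_failure")
def notify_payments_team(event):
    # In a real workflow this might open a ticket or page an on-call engineer.
    return f"ticket opened for order {event['order_id']}"

def dispatch(event):
    """Look up and run the workflow for an event, if one is registered."""
    handler = workflows.get(event["type"])
    return handler(event) if handler else None

result = dispatch({"type": "payment_failure", "order_id": "A17"})
```

The appeal of the low-code approach is that business teams define the equivalent of the `@workflow` mapping visually, without writing the plumbing themselves.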

“The ability to conduct exploratory, causal-AI-based analytics on petabytes of unified observability, security and business event data multiplies the value of this data for our customers,” said Bernd Greifeneder, founder and chief technical officer at Dynatrace.

Speaking at the Dynatrace Perform keynote in Las Vegas, Greifeneder explained that the company’s software intelligence platform provides end-to-end observability (right the way from the company datacentre to the computing edge), an aspect that really defines what Dynatrace is today. The event’s core keynote session was initially presented by Dynatrace CEO Rick McConnell – this was followed by a session entitled ‘defying boundaries to drive success’ with Dynatrace chief marketing officer Mike Maciag and Dynatrace SVP of product management Steve Tack.

“We are not focused on tooling up an army of people sat in a network monitoring centre with information designed to point them to alerts when and where they happen,” said McConnell, speaking in Las Vegas. “No, we are focused on automating that process and enabling flawless and secure digital interactions, some of which will involve users, but some of which will be machine-to-machine.”

An IT-savvy CEO

Talking about the new world of data-driven operations, the Dynatrace chief spent a good period explaining how his firm’s platform is now looking to develop and deliver observability. As a CEO, McConnell knows his MTTR from his HTTP and is clearly no marketing lead or glorified management consultant who has made it to the top; that is, he understands what development teams are doing in relation to how they implement AIOps and new control layers for better IT operations.

When we look at the way the Dynatrace Applications & Microservices ‘module’ works, the company is now ensuring it can operate in serverless environments and has worked with the major cloud hyperscalers to make sure it can deliver on that functionality. According to Tack and Maciag, observability really does mean being able to look at what is happening inside systems all the way out to the computing edge.

The Dynatrace product family now spans OneAgent, PurePath, Smartscape, Grail and Davis AI. We know that this is a busy sector of the tech industry and the company faces many competitors in the cloud and data systems observability market, but Dynatrace is (arguably) among the more interesting (if not the most vocal) technology vendors working in this space.

In terms of working practice, software engineers can use Dynatrace with Service Level Objectives (SLOs) to make applications self-heal if their performance drops below a pre-defined baseline. They can then make sure security alerts go straight to the right person to resolve – rather than dampening everyone’s day – and automatically spin up more cloud resources when their user base starts to outgrow the existing set-up.
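The SLO-driven self-healing loop described above can be sketched in miniature. This is not Dynatrace code; the service name, the 99.5% objective and the `remediate` helper are all assumptions made for illustration – in a real set-up the remediation step would call a cloud provider’s API:

```python
def evaluate_slo(success_ratio, objective=0.995):
    """Return True if the service currently meets its availability SLO."""
    return success_ratio >= objective

def remediate(service, scale_up):
    """Hypothetical remediation hook: restart the service, or scale it up
    when the user base is outgrowing the existing set-up."""
    action = "scale_up" if scale_up else "restart"
    return f"{action} triggered for {service}"

# 98.9% of requests succeeded, below a 99.5% objective, so self-heal kicks in.
actions = []
if not evaluate_slo(0.989):
    actions.append(remediate("checkout-service", scale_up=True))
```

The key design point is that the baseline (the SLO objective) is pre-defined, so the decision to act is mechanical and needs no human in the loop.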

Opening up the firehose

The new Dynatrace AppEngine offering allows software application developers and data professionals to use those same low/no-code approaches to build their own apps for the Dynatrace platform. That means they can start using all the observability, security and business data in Dynatrace for use cases that haven’t even been thought of yet – opening up a torrent of possibilities.

To demonstrate this, Dynatrace has already created some apps of its own, which are available to customers through its Hub. These include Carbon Impact – an app which allows organisations to use the observability data coming from their hybrid and multi-cloud environments to understand how those environments contribute to their carbon footprint, so they can find ways of improving their environmental sustainability by introducing efficiencies.

The company has also now come forward with Site Reliability Guardian, an app which helps teams to create automated quality and security gates in their software delivery pipeline so they can ensure software reliability never degrades below pre-defined thresholds. All these announcements may add up to one thing: a future that is more data-driven for all organisations, more intelligently automated and a whole lot more observable.
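The quality-and-security-gate idea behind Site Reliability Guardian can be illustrated with a minimal sketch, assuming made-up metric names and thresholds (this is not the product’s actual configuration format):

```python
def quality_gate(metrics, thresholds):
    """Pass a release candidate only if every metric stays within its
    pre-defined threshold; return (passed, list_of_violations)."""
    violations = [
        name for name, limit in thresholds.items()
        if metrics.get(name, float("inf")) > limit
    ]
    return (not violations, violations)

# Release candidate: p95 latency is too high, so the gate blocks the deploy.
passed, why = quality_gate(
    {"p95_latency_ms": 420, "error_rate": 0.002},
    {"p95_latency_ms": 300, "error_rate": 0.01},
)
```

Placed in a delivery pipeline, a check like this is what ensures reliability never degrades below the agreed thresholds: a failing gate simply stops the release from progressing.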