A new Dynatrace platform extension brings observability to large language models (LLMs) and generative AI-powered applications.
The Dynatrace platform is best known for helping companies keep their software performing well through application performance monitoring. At its Perform conference, however, the company unveiled new capabilities for the platform. “It includes an end-to-end AI stack: infrastructure such as Nvidia GPUs, foundation models such as GPT-4, semantic caches and vector databases such as Weaviate, and orchestration frameworks such as LangChain,” Dynatrace explained at the launch. There is also support for platforms widely used to build and train models, such as Microsoft Azure OpenAI Service, Amazon SageMaker and Google AI Platform.
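To illustrate where such a stack meets application code, the sketch below traces a LangChain call to an OpenAI model with OpenTelemetry, which Dynatrace can ingest over OTLP. It is a minimal, hypothetical example: the service name, endpoint, API token, model and gen_ai.* attribute names are illustrative assumptions, not Dynatrace specifics.

```python
# Hypothetical sketch: tracing a LangChain call to an OpenAI model with
# OpenTelemetry. Endpoint, token and attribute names are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langchain_openai import ChatOpenAI

# Export spans over OTLP/HTTP to an observability backend (placeholder values).
exporter = OTLPSpanExporter(
    endpoint="https://<your-env>.live.dynatrace.com/api/v2/otlp/v1/traces",
    headers={"Authorization": "Api-Token <your-ingest-token>"},
)
provider = TracerProvider(resource=Resource.create({"service.name": "genai-demo"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("genai-demo")

llm = ChatOpenAI(model="gpt-4")  # orchestration framework + foundation model

with tracer.start_as_current_span("llm.chat") as span:
    response = llm.invoke("Summarize today's error logs in one sentence.")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    # Token usage reported by the provider, attached as span attributes.
    usage = response.response_metadata.get("token_usage", {})
    span.set_attribute("gen_ai.usage.input_tokens", usage.get("prompt_tokens", 0))
    span.set_attribute("gen_ai.usage.output_tokens", usage.get("completion_tokens", 0))
```

Recording the model name and token counts on the span is what allows a backend to break down latency and consumption per model call.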
AI Observability insights
The AI Observability service builds on Dynatrace’s Davis AI, which can predict application anomalies and analyze observability and security data. Davis AI also includes a copilot component that helps users create queries, notebooks and dashboards.
Combined with Dynatrace’s other technologies, AI Observability should give companies a complete picture of their AI-powered applications. The insights allow them to automatically identify performance bottlenecks and their root causes. In addition, AI Observability tracks token consumption, the units of text that models consume to process queries, and predicts how that consumption will develop.
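As a sketch of how token consumption could be reported in practice, the snippet below records input and output tokens as an OpenTelemetry counter with the model as a dimension, so a backend such as Dynatrace can chart and forecast usage. The metric name, attribute keys and endpoint are assumptions for illustration, not Dynatrace-defined conventions.

```python
# Hypothetical sketch: reporting token consumption as an OpenTelemetry counter.
# Metric and attribute names are illustrative assumptions.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Periodically export metrics over OTLP/HTTP (placeholder endpoint and token).
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(
        endpoint="https://<your-env>.live.dynatrace.com/api/v2/otlp/v1/metrics",
        headers={"Authorization": "Api-Token <your-ingest-token>"},
    )
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
meter = metrics.get_meter("genai-demo")

token_counter = meter.create_counter(
    "gen_ai.client.token.usage",
    unit="{token}",
    description="Tokens consumed per model call",
)

def record_usage(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Record input and output tokens with dimensions a backend can slice on."""
    token_counter.add(prompt_tokens, {"gen_ai.request.model": model, "gen_ai.token.type": "input"})
    token_counter.add(completion_tokens, {"gen_ai.request.model": model, "gen_ai.token.type": "output"})

record_usage("gpt-4", prompt_tokens=812, completion_tokens=135)
```

Splitting the counter by model and token type is what makes per-model consumption trends, and therefore forecasts, possible on the backend side.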
Finally, AI Observability makes it possible to accurately trace the origin of the output that apps generate. This helps companies better comply with privacy and security regulations and governance standards.
Dynatrace AI Observability is available immediately.
Tip: Dynatrace brings observability to serverless architectures