Datadog Experiments lets teams run A/B tests and product experiments directly within the Datadog platform. With it, Datadog brings business metrics, product analytics, and application observability together in a single platform for the first time.
The new product addresses a persistent problem Datadog has identified in product development: teams having to stitch together multiple standalone tools to validate experiments. Think of a separate tool for product analytics, an experimentation platform, and a monitoring solution, each with its own data, its own dashboards, and its own blind spots. That fragmentation costs time and leads to incomplete decisions.
Acquisition of Eppo as a foundation
Datadog Experiments is built on Datadog’s acquisition of Eppo, a specialized experimentation platform. The integration combines Eppo’s statistical methods with Datadog’s real-time observability, allowing teams to link experiments to Real User Monitoring (RUM), Product Analytics, APM, and logs. The platform compares results directly with business metrics from the company’s own data warehouse, in a way that is repeatable and auditable.
Datadog Experiments revolves around three core features. Experiments are self-service and standardized, allowing teams to move quickly from insight to decision. Built-in guardrails and real-time feedback help teams detect issues early and keep experiments statistically valid. Results are reproducible because impact is measured directly against the company’s business metrics in its own data warehouse.
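To make the idea of self-service, reproducible experiments concrete: experimentation platforms typically assign users to variants deterministically, by hashing a stable user identifier together with the experiment name, so a user sees the same variant on every visit. The sketch below illustrates that general technique in Python; it is a simplified illustration, not Datadog’s or Eppo’s actual implementation, and the function and experiment names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing user_id together with the experiment name gives a
    stable, uniformly distributed assignment: the same user
    always lands in the same variant, and different experiments
    bucket independently. (Illustrative sketch only.)
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Repeated calls with the same inputs return the same variant,
# which is what makes experiment results reproducible.
print(assign_variant("user-123", "new-checkout-flow"))
```

Because assignment depends only on the hash inputs, results can be re-derived and audited later without storing per-user assignment state.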
Datadog emphasizes that the product is particularly relevant now that AI is accelerating software development. “AI has increased the pace and complexity of software releases exponentially. Too often, though, teams are flying blind when it comes to measuring the efficacy of new code,” said Chief Product Officer Yanbing Li.
Datadog Experiments is now generally available.