Measure DevOps Metrics That Matter
May 19, 2020

Elysia Lock
Flux7

As the old adage goes, what gets measured gets done. Measurement is a key enabler of any DevOps transformation, and yet it is an oft-neglected aspect of projects. Organizations struggle to get beyond the starting blocks when learning how to measure DevOps. So, in today's blog, I will share important DevOps metrics your team can use to get started on your journey to measuring positive change.

A challenge I frequently hear from DevOps teams is that there is no clear starting place, no benchmark to measure from. They ask, "If you don't know where you are starting, how do you measure improvement from that place?" My advice is to simply start. Start measuring and your yardstick will appear. You will see, and be able to show, improvement over time. Instead of measures like "the release made it out to production," you'll be able to report on DevOps metrics that meaningfully impact the business.

Why DevOps Metrics Matter

DevOps metrics are important as they help inform data-driven decisions that can guide continuous improvement efforts. And, with the right measures, you can link DevOps improvements with measurable impact on greater goals like digital transformation efforts. The DevOps Research and Assessment (DORA) group helpfully provides us with clear metrics to track, and even more insights with its latest report, Accelerate State of DevOps 2019.

DORA's Research-Driven Guidelines

Over the past six years, DORA has worked to develop four DevOps measurements indicative of an organization's software delivery performance and its ability to meet its DevOps goals. This year the group has enriched its research by identifying the capabilities that drive improvement in each of these four key areas. Using DORA's four key metrics as a foundation, let's explore the options and tools available for gathering metrics in DevOps.

Deployment Frequency

This metric gauges the throughput of your software delivery process, telling you how often and how quickly new services or features are deployed to production. It also tells you quite a bit about your process effectiveness. For example, if there are bottlenecks in the process, measuring deployment frequency will help you unearth them by prompting key questions such as:

■ Are there unnecessary steps in the process or are these steps in the wrong order?

■ What can we automate?

■ Are we the right team to manage this part of the process?

■ Do upstream issues exist that affect our responsiveness?

■ Do we have access to the tools we need to ensure timely deployments?

Over time, deployment frequency should hold steady or increase. In the spirit of continuous improvement, decreases or dips should be reviewed closely to identify (and remediate when possible) the root cause. DORA identifies elite performers as those able to deploy on demand, multiple times a day. Conversely, low performers deploy once every one to six months.
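
If your CI/CD tooling can export a history of production deployment timestamps, a first pass at measuring deployment frequency can be as simple as counting releases per week. The following is a minimal sketch in Python; the deployments list and its values are hypothetical placeholders for data pulled from your own pipeline.

from collections import Counter
from datetime import datetime

# Hypothetical production deployment timestamps, e.g. exported from your
# CI/CD tool's deployment history.
deployments = [
    datetime(2020, 5, 4, 9, 30),
    datetime(2020, 5, 4, 15, 10),
    datetime(2020, 5, 6, 11, 0),
    datetime(2020, 5, 11, 14, 45),
]

# Count deployments per ISO (year, week) to see how frequency trends over time.
per_week = Counter(d.isocalendar()[:2] for d in deployments)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")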

Lead Time for Code Changes

Along with deployment frequency, this metric measures the throughput of the software delivery process. DORA recommends measuring the lead time for code changes from the point in time when code is checked in to the point it is released. This measure can also help you gauge the efficiency of your processes, the effectiveness of your supporting systems, and the general capabilities of your development team. For example, lengthy lead times can unearth inefficiencies in the development process or deployment bottlenecks.

As your team becomes more familiar and efficient with its DevOps processes, you should expect to see your lead time for changes decrease over time. Elite performers' lead time is less than one day, whereas low performers need between one and six months.
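
As a hypothetical illustration, if you can pair each change's check-in time with the time it was released (for example, by joining version control history with deployment logs), the lead-time calculation itself is straightforward. The changes list below is a placeholder for your own data.

from datetime import datetime
from statistics import median

# Hypothetical (check-in time, release time) pairs for individual changes.
changes = [
    (datetime(2020, 5, 4, 9, 0), datetime(2020, 5, 4, 17, 30)),
    (datetime(2020, 5, 5, 10, 0), datetime(2020, 5, 7, 12, 0)),
    (datetime(2020, 5, 6, 8, 0), datetime(2020, 5, 6, 16, 0)),
]

# Lead time per change, in hours, from check-in to release.
lead_times = [(released - checked_in).total_seconds() / 3600
              for checked_in, released in changes]

print(f"Median lead time: {median(lead_times):.1f} hours")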

DevOps Change Failure Rate

DORA flags the change fail rate as a measure of the quality of the release process. It gets to the heart of how many application or service changes, builds, or deployments create a service issue large enough to require remediation. The change fail rate would ideally be managed down to as close to zero as possible. And, indeed, all but low performers have a change fail rate between zero and 15%.

The IT ticket system is an effective tool for measuring fail rates, tracking for each change whether it succeeded, the impact of any failure, and the remediation required. For example, your ticket system can report whether an approved change led to a service outage that required a rollback of the change.
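
Once each change is flagged this way, the calculation itself is simple. The sketch below assumes a hypothetical export from a ticket system in which each change record notes whether it caused a service issue that needed remediation; the record format is illustrative only.

# Hypothetical change records exported from a ticket system.
changes = [
    {"id": "CHG-101", "caused_incident": False},
    {"id": "CHG-102", "caused_incident": True},
    {"id": "CHG-103", "caused_incident": False},
    {"id": "CHG-104", "caused_incident": False},
]

failed = sum(1 for c in changes if c["caused_incident"])
failure_rate = failed / len(changes) * 100

print(f"Change failure rate: {failure_rate:.1f}% ({failed} of {len(changes)} changes)")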

Time to Restore Service

Once a service-impacting incident is detected, how long does it take to remediate and restore the service? This question measures system stability. Naturally, you'll want to restore services as quickly as possible, as the cost of service outages to the business can be extreme. A Fortune 1000 survey by IDC found that the average cost of an infrastructure failure is $100,000 per hour.

When it comes to this measure, DORA research finds a significant gap between elite and low performers. Elite organizations are able to restore services on average in less than one hour whereas low performers report taking between one week and one month. High and medium performers are able to restore service within a day.

If you issue tickets for system repairs, your ticket system should be able to report on time to restore service. Tracking this metric will give you a distinct trend line illustrating progress over time. This is just one way to measure this metric. Often, a look at the monitoring tools that come with your cloud resources will give you this information. In the best scenarios, failures are self-healing and fail over within milliseconds.
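
However your incidents are tracked, the underlying calculation is the elapsed time between detection and resolution. Here is a minimal sketch, assuming hypothetical incident records with detection and resolution timestamps pulled from your ticket system or monitoring tool.

from datetime import datetime
from statistics import mean

# Hypothetical (detected, resolved) timestamps for service-impacting incidents.
incidents = [
    (datetime(2020, 5, 4, 9, 0), datetime(2020, 5, 4, 9, 40)),
    (datetime(2020, 5, 10, 22, 15), datetime(2020, 5, 11, 1, 5)),
]

# Time to restore per incident, in hours.
restore_times = [(resolved - detected).total_seconds() / 3600
                 for detected, resolved in incidents]

print(f"Mean time to restore: {mean(restore_times):.1f} hours")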

Business-Impacting

While these four metrics are a very helpful starting place to measure DevOps improvement and success, it is absolutely critical that teams take the initiative to link these metrics to the business. For example, increased deployment frequency allows the DevOps team to address new customer requests faster, growing customer satisfaction. Tracking key metrics is important to the business and even more so if you can show the business how DevOps processes are driving improvement over time that directly impacts key corporate goals.

Some tools allow for value stream mapping, which directly ties code changes to features released. In some cases, e.g. retail applications, you can directly tie new feature introductions to revenue impact.

DevOps Dashboard

With these four key metrics in hand, you are now in a position to build a dashboard for ongoing tracking and reporting. There is a range of commonly used DevOps metrics dashboard tools available, both commercial and open source, suitable for most needs and budgets.
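
As a hypothetical illustration of what such a dashboard might consume, a periodic snapshot of the four key metrics can be kept as a small structured record, which most dashboard tools can ingest and chart over time. The field names and values below are illustrative only, computed with scripts like the ones above.

import json

# Hypothetical per-period snapshot of the four key DORA metrics.
snapshot = {
    "period": "2020-W20",
    "deployments_per_week": 4,
    "median_lead_time_hours": 8.2,
    "change_failure_rate_percent": 12.5,
    "mean_time_to_restore_hours": 1.7,
}

print(json.dumps(snapshot, indent=2))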

DORA's four key metrics will not only allow you to show progress and highlight areas for improvement for the DevOps team; because they are widely used, they will also let you benchmark your team against its peers for external validation of its progress. And, you'll have a genuine, numbers-based response when your boss drops by to ask how the team is progressing. Most importantly, with this data in hand, you will be prepared to change course quickly when you don't see a benefit in something you have built, to leverage the insights you gain from your experiments, and to capitalize on your successes, helping the business reach its ultimate goals.

Elysia Lock is a Solutions Architect at Flux7, an NTT DATA Company