Data-Driven Benchmarks for High-Performing Engineering Teams
February 02, 2021

Michael Stahnke
CircleCI

What does a high-performing engineering team really look like? It can be hard to know, but diving into the effectiveness of your delivery capabilities can tell you quite a bit.

Do deploys require a lot of cross-team coordination?

When production breaks, is it a long time before you can get it back up and running?

Are you getting feedback and results from your changes quickly?

The global challenges of 2020 have highlighted that a well-oiled software delivery team is a competitive advantage, and industry benchmarks are the clearest way to understand how your DevOps practices measure up.

When it comes to optimizing continuous integration and continuous delivery (CI/CD), Throughput, Duration, Mean Time to Recovery, and Success Rate are the most important metrics to consider. Measuring these benchmarks will tell you whether you're delivering product to your customers in the most efficient way possible.


How to Measure DevOps Success

After analyzing more than 55 million data points from 44,000 organizations on CircleCI, our 2020 State of Software Delivery report found baseline numbers for each of these benchmarks to guide engineering teams in making smarter decisions around CI/CD.

Throughput: the number of workflow runs matters less than being at a deploy-ready state most or all of the time.

Duration: teams want to aim for workflow durations in the range of five to ten minutes.

Mean Time to Recovery: teams should aim to recover from any failed runs by fixing or reverting in under an hour.

Success Rate: success rates above 90% should be your standard for the default branch of an application.

While some teams may have business-specific reasons for choosing different metrics as goals, any effort to improve engineering productivity or process will hinge on your ability to measure your team's baseline metrics and make incremental improvements.

4 Key Benchmarks

Let's take a closer look at each of these metrics so you can understand what they mean and why they're valuable.

Throughput is defined as the average number of workflow runs per day. I recommend monitoring Throughput rates rather than setting explicit goals. It's important to see how often things are happening, and Throughput is a direct measurement of commit frequency. A particular number of deploys per day is not the goal, but continuous validation of your codebase via your pipeline is.
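As a rough sketch, Throughput can be derived from workflow-run timestamps alone. Here `run_timestamps` is a hypothetical list of Python `datetime` objects you have already collected, not a CircleCI API shape:

```python
from datetime import datetime

def daily_throughput(run_timestamps: list[datetime]) -> float:
    """Average workflow runs per day over the observed window."""
    days = [ts.date() for ts in run_timestamps]
    window_days = (max(days) - min(days)).days + 1  # count zero-run days too
    return len(run_timestamps) / window_days
```

The point of dividing by the full calendar window, including quiet days, is that the number reflects how continuously the codebase is being validated rather than how busy the busiest days were.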

Duration is defined as the length of time it takes for a workflow to run. It's the most important metric in this list because creating a fast feedback cycle hinges on Duration.

It's important to emphasize here that speed alone is not the goal. A workflow without tests can run quickly and return green, a signal that is not helpful to anyone. Without a quality testing suite, workflows with short durations aren't contributing valuable information to the feedback cycle. The goal, then, is rich information combined with short Duration.
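When tracking Duration against the five-to-ten-minute target, a percentile view is more informative than a single average, since one slow outlier can skew the mean of an otherwise fast pipeline. A minimal sketch, assuming `durations_seconds` is a plain list of workflow durations (in seconds) you have already collected:

```python
import statistics

def duration_minutes(durations_seconds: list[float]) -> dict[str, float]:
    """Median and 95th-percentile workflow duration, reported in minutes."""
    p95 = statistics.quantiles(durations_seconds, n=20)[18]  # 19th cut point = p95
    return {
        "median": statistics.median(durations_seconds) / 60,
        "p95": p95 / 60,  # compare against the five-to-ten-minute target
    }
```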

Mean Time to Recovery is defined as the average time from a failed run to the next successful run. This is the second most important metric in this list. Because Mean Time to Recovery improves with more comprehensive test coverage, this metric can be a proxy for how well-tested your application is.

Failed build, valuable signal, rapid fix, passing build: continuous integration makes these rapid feedback loops possible. The fast signals enable teams to try new things and respond to any impact immediately.
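One way to approximate Mean Time to Recovery from that loop is to measure each red-to-green interval and average them. A minimal sketch, assuming `runs` is a chronological list of hypothetical `(finished_at, passed)` pairs rather than any particular API response:

```python
from datetime import datetime, timedelta
from typing import Optional

def mean_time_to_recovery(runs: list[tuple[datetime, bool]]) -> Optional[timedelta]:
    """Average time from the first failure in a red streak to the next green run."""
    recoveries, failed_at = [], None
    for finished_at, passed in runs:  # runs must be in chronological order
        if not passed and failed_at is None:
            failed_at = finished_at        # a streak of failures begins
        elif passed and failed_at is not None:
            recoveries.append(finished_at - failed_at)
            failed_at = None               # recovered: the streak ends
    if not recoveries:
        return None  # nothing has failed and recovered yet
    return sum(recoveries, timedelta()) / len(recoveries)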

Success Rate is defined as the number of passing runs divided by the total number of runs over a period of time. Git workflows that rely on topic-branch development, rather than default-branch development, enable teams to keep their default branches green.

By scoping feature development to topic branches, we can differentiate between intentional experiments (where failing builds are valuable and expected) and stability issues (where failing builds are undesirable). Success Rate on the default branch is a more meaningful metric than Success Rate on a topic branch.
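The computation itself is a simple ratio; the key design choice, as described above, is scoping it to the default branch. A sketch under that assumption, where `runs` is a hypothetical list of dicts with `branch` and `passed` keys:

```python
def success_rate(runs: list[dict], branch: str = "main") -> float:
    """Passing runs divided by total runs, scoped to a single branch."""
    scoped = [r for r in runs if r["branch"] == branch]
    if not scoped:
        raise ValueError(f"no runs found for branch {branch!r}")
    # Benchmark: above 0.9 on the default branch
    return sum(r["passed"] for r in scoped) / len(scoped)
```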

How Does Your Team Measure Up?

While there is no universal standard that every team should aspire to, the data collected on software delivery patterns globally show that there are reasonable benchmarks for teams to set as goals.

How does your team compare to the most successful teams developing software today?

Michael Stahnke is VP of Platform at CircleCI