Git Shipping Faster: Software Delivery Metrics that Drive Business ROI in 2025
April 22, 2025

Rob Zuber
CircleCI

With today’s engineering teams under immense pressure to find an edge, the ability to deliver high-quality software quickly and reliably has become a critical differentiator. Advancements like CI/CD automation, infrastructure as code, and AI-powered developer tools have raised the floor for what it means to be good at delivering software. The 2025 State of Software Delivery report provides valuable insights into how engineering teams across different industries and company sizes are performing against key delivery metrics and, more importantly, how those metrics translate directly to business value.


The Four Metrics That Matter

As software continues to eat the world, organizations are increasingly focusing on four fundamental metrics that measure engineering impact: duration, throughput, success rate, and mean time to recovery (MTTR). These metrics aren't just technical indicators — they represent real business outcomes that impact an organization's bottom line.
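To make these definitions concrete, here is a minimal sketch in Python of how the four metrics could be derived from raw workflow run records. The `WorkflowRun` structure is hypothetical and not tied to any particular CI provider's API; it simply captures the fields each definition needs.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class WorkflowRun:
    # Hypothetical record of a single CI workflow run
    triggered_at: datetime
    finished_at: datetime
    succeeded: bool
    branch: str

def duration_minutes(run: WorkflowRun) -> float:
    """Duration: time from trigger to completion, in minutes."""
    return (run.finished_at - run.triggered_at).total_seconds() / 60

def throughput_per_day(runs: list[WorkflowRun]) -> float:
    """Throughput: average number of workflow runs per calendar day."""
    days = {r.triggered_at.date() for r in runs}
    return len(runs) / max(len(days), 1)

def success_rate(runs: list[WorkflowRun], branch: str = "main") -> float:
    """Success rate: share of runs on a branch that finish without failure."""
    branch_runs = [r for r in runs if r.branch == branch]
    return sum(r.succeeded for r in branch_runs) / max(len(branch_runs), 1)

def mttr_minutes(runs: list[WorkflowRun]) -> float:
    """MTTR: median time from a failed run until the next successful run."""
    ordered = sorted(runs, key=lambda r: r.finished_at)
    recoveries = []
    failed_at = None
    for r in ordered:
        if not r.succeeded and failed_at is None:
            failed_at = r.finished_at          # pipeline just broke
        elif r.succeeded and failed_at is not None:
            recoveries.append((r.finished_at - failed_at).total_seconds() / 60)
            failed_at = None                   # pipeline is green again
    return median(recoveries) if recoveries else 0.0
```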

Duration, which measures the time from when a workflow is triggered until completion, has a median of 2 minutes 43 seconds across industries. However, the top performers complete workflows in under 38 seconds, while slower teams take more than 8 minutes. This disparity is significant when we consider the impact on productivity. For instance, reducing workflow duration from 20 minutes to 10 minutes across 300 daily runs could reclaim 750,000 minutes of developer time annually — translating to over $1 million in productivity gains at typical engineering compensation rates.
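The arithmetic behind that estimate is easy to reproduce. A rough sketch, assuming roughly 250 working days per year and a fully loaded engineering cost of about $85 per hour (both assumptions for illustration, not figures from the report):

```python
# Back-of-the-envelope duration savings; all inputs are illustrative assumptions
runs_per_day = 300          # daily workflow runs
minutes_saved_per_run = 10  # 20-minute workflows reduced to 10 minutes
working_days = 250          # assumed working days per year
hourly_cost = 85            # assumed fully loaded cost per engineering hour, USD

minutes_per_year = runs_per_day * minutes_saved_per_run * working_days
hours_per_year = minutes_per_year / 60
annual_value = hours_per_year * hourly_cost

print(f"{minutes_per_year:,} minutes reclaimed per year")                # 750,000 minutes
print(f"{hours_per_year:,.0f} hours, worth about ${annual_value:,.0f}")  # 12,500 hours, ~$1,062,500
```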

Throughput, which measures the average number of workflow runs per day, serves as a key indicator of team productivity. The median throughput in the dataset is 1.64 workflow runs per day, but high-performing organizations run thousands of workflows daily. This widening gap indicates the untapped potential in most software teams. A targeted investment in a dedicated platform engineering team to remove friction from development pipelines can boost throughput significantly, delivering the equivalent impact of adding dozens of additional engineers without the corresponding headcount increase.

Success rate measures the percentage of runs that complete without failure. While failures are expected (even useful) during development on feature branches, maintaining a high success rate on the main branch is crucial for deploy-readiness. The average main-branch success rate is 82.15%, still short of the 90% benchmark achieved by top performers. Improving from a 75% to a 90% success rate could save thousands of engineering hours that would otherwise be spent debugging and resolving pipeline issues.
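A similar back-of-the-envelope estimate can be attached to that success rate improvement. The run volume and debugging time below are illustrative assumptions, not report figures:

```python
# Hypothetical estimate of hours saved by raising main-branch success rate
runs_per_day = 300             # assumed daily main-branch workflow runs
working_days = 250             # assumed working days per year
debug_hours_per_failure = 0.5  # assumed average time spent triaging a failed run

failures_before = runs_per_day * working_days * (1 - 0.75)  # at a 75% success rate
failures_after = runs_per_day * working_days * (1 - 0.90)   # at a 90% success rate
hours_saved = (failures_before - failures_after) * debug_hours_per_failure

print(f"{failures_before - failures_after:,.0f} fewer failed runs per year")  # 11,250
print(f"~{hours_saved:,.0f} engineering hours saved")                         # ~5,625
```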

Mean time to recovery (MTTR) measures how long it takes teams to resolve workflow failures. The median MTTR is 63.8 minutes, with top performers resolving issues in under 15 minutes. However, the average swells to 24.3 hours due to a long tail of extended recovery times. Reducing MTTR from 4 hours to 90 minutes could reclaim tens of thousands of hours for innovation annually. Keep in mind that MTTR effort is best focused on the pipelines that keep the business running, such as the main branch. Most organizations will have other branches that don’t meet these standards, and that’s okay.
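The same sanity check works for MTTR. Only the 4-hour and 90-minute figures come from the example above; the annual failure count is an assumption, and the sketch treats recovery time as fully blocked engineering time, which is an upper bound:

```python
# Hypothetical estimate of hours reclaimed by reducing main-branch MTTR
failures_per_year = 10_000   # assumed main-branch failures needing recovery per year
mttr_before_hours = 4.0      # 4-hour mean time to recovery
mttr_after_hours = 1.5       # 90-minute mean time to recovery

hours_reclaimed = failures_per_year * (mttr_before_hours - mttr_after_hours)
print(f"~{hours_reclaimed:,.0f} hours reclaimed per year")  # ~25,000
```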

Company Size and Team Structure Matter

As companies grow, factors such as approval processes, compliance requirements, and coordination across business units significantly influence how pipelines are implemented and managed to deliver on different company priorities. Small organizations (2-20 employees) achieve the highest workflow success rates at 85%, but struggle with recovery when failures occur. Mid-sized companies (51-100 employees) demonstrate balanced performance, combining fast recovery times with high throughput and solid success rates.

Teams of 6-10 engineers achieve the fastest recovery, with a median MTTR of 29 minutes. As teams grow to around 50 engineers, throughput remains flat while recovery times lengthen dramatically, peaking at 170 minutes. Interestingly, teams of 100+ developers show signs of effective scaling: their median MTTR of 77 minutes is significantly better than that of 51-100 person teams, suggesting investment in tools and processes that help manage complexity.

These findings suggest specific strategies for different organization sizes:

Small companies should focus on building resilient pipelines that can run independently when limited staff are pulled into other priorities.

Mid-sized companies should maintain their quick recovery advantages while standardizing processes across the organization.

Large companies should streamline change management and approval flows without compromising control, balancing build speed optimization with processes that scale.

All in all, engineering team size has significant implications for delivery performance. It shapes critical dynamics such as communication patterns, role specialization, and workflow complexity. Smaller teams tend to move quickly with less coordination overhead, while larger teams must navigate dependencies, standardization, and process complexity as they scale. Understanding these trade-offs is key to optimizing development velocity and reliability.

Industry Performance Reveals Surprising Leaders

The top performers in workflow duration span regulated sectors like healthcare and defense, as well as infrastructure-critical industries like utilities and distribution. Similarly, the highest throughput rates are achieved across diverse sectors from utilities to banking, retail, and education.

Airlines and biotech lead in success rates, exceeding the 90% benchmark on main-branch workflows — understandable for industries where software failures can endanger lives. While these industries experience fewer failures, their recovery times are among the longest, reflecting the complex validation processes that make resolving rare failures more time-intensive. In contrast, industries like consumer goods, hospitality, and software excel in rapid recovery, likely due to their "ship fast, fix fast" approach.

The data contradicts the assumption that heavily regulated industries can't achieve high performance. With proper tooling and automation, even industries with strict operational requirements can achieve elite development velocity.

The "Elite" Performance Gap

The gap between average performers and elite teams is striking and worth calling out. While most teams measure daily deployments in single digits, elite performers ship thousands (yes, thousands) of changes each day. The top organization in the dataset achieved nearly 15,000 daily workflows — a scale simply unachievable with manual processes or basic automation tools.

This type of scale requires robust infrastructure that can deliver continuous feedback at every stage without impeding velocity. Top-performing teams have turned software delivery from a cost center into a value multiplier by investing in tooling and practices that automate complex workflows, optimize resources, and provide actionable insights.

Translating Engineering Metrics to Business Value

The true value of these metrics lies in their direct connection to business outcomes. Duration improvements translate to faster time-to-market and more productive developers. Higher throughput means more features delivered to customers. Better success rates reduce rework and improve quality. Faster recovery times minimize the business impact of failures.

For executives and business leaders, these aren't just engineering metrics — they represent competitive advantages that compound over time.

Organizations looking to improve should consider three key takeaways:

1. Industry ≠ destiny: Top performers span regulated and unregulated sectors, showing that proper tooling and automation matter more than industry requirements.

2. Quality gates drive productivity: Longer validation cycles correlate with higher throughput. Investing in testing accelerates, rather than inhibits, delivery speed.

3. Let risk inform response: Match your CI/CD strategy to failure impact. Aim for early issue detection in safety-critical systems and frequent deployments with fast fixes in customer-facing services.

As software continues to drive business transformation across industries, organizations that work towards continuously improving on these key delivery metrics will gain significant advantages in speed, quality, and market responsiveness. The impressive performance of elite teams shows what's possible — and provides a roadmap for others to follow.

Rob Zuber is CTO at CircleCI