Release Management Through a DevOps Lens
PART 2 of a 4-PART BLOG SERIES
January 27, 2022

Bob Davis
Plutora

Much of the pain that comes with releases in traditional operating environments stems from the disconnect between the development and IT operations teams, often described as a "wall of confusion" between the two silos. To remedy this, many organizations have turned to DevOps to break down the silos and deliver more value, faster and more safely, by balancing throughput and stability.

Start with: Release Management Part 1: Why It Exists

There are two key principles in DevOps that can be used as a starting point:

■ Release weekends are bad

■ Releases should be "like breathing"

It's most apparent that these principles are needed when the development team "tosses new code over the wall," putting the onus on the operations team to perform the release with limited knowledge of its makeup or origin.

A number of issues can spring up if this goes wrong:

■ Failure can be disastrous

■ It becomes highly stressful for all involved

■ It may be incredibly expensive, not only to fix but also because of potential fines and reputational damage

■ It may even cost people their jobs and halt career progress

■ It creates tremendous tension between teams, causing people to point fingers and pass the buck

It can be helpful to understand what Transformation Consultant Simone Jo Moore identified as her four critical characteristics of release management in DevOps ways of working:

■ Smaller

■ Faster

■ Safer

■ Frictionless

According to Moore, "The conversations between development and IT operations need to be shifted and improved. Development making sure that IT operations know what they need to know continues to be a failure. Not including IT operations in design is a mistake."

Agile and DevOps are meant to help us work in smaller increments in order to get quicker feedback. This speeds up value delivery while reducing risk. As Moore established, this is a safer way of working that minimizes friction as handoffs between teams are reduced.

Releasing small pieces frequently makes it a bit like breathing in that it's something we do so regularly that it becomes routine and not stressful at all. To that end, Agile and DevOps are geared towards constructing more sustainable work environments and reducing burnout resulting from traditional working patterns. Read on to learn about them in greater detail.

DevOps' Long-Term Vision

In an Agile and DevOps world, teams can release new fixes and features whenever they are ready, which means frequently and on demand. They built it, so they own it. These are long-standing, autonomous, multifunctional teams, each associated with a platform, product or value stream.

With a continuous delivery pipeline, software always remains in a releasable state: once a change successfully passes its automated tests, it is deployed and released into production.
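As a rough sketch of that gating logic, the example below uses hypothetical stage functions (build and run_automated_tests are stand-ins, not any particular CI/CD product's API): a change only reaches production if every automated check passes, otherwise the software simply stays in its last releasable state.

```python
from typing import Callable

# Hypothetical pipeline stages. In a real setup these would be jobs in a
# CI/CD tool; here each stage is a function returning True on success so the
# gating logic stays self-contained and runnable.
def build() -> bool:
    print("building artifact...")
    return True

def run_automated_tests() -> bool:
    print("running automated test suite...")
    return True  # a failing suite would return False and block the release

def deploy_to_production() -> None:
    # Stand-in for the real deployment step (e.g. rolling out a new image).
    print("all checks passed -- change deployed and released to production")

STAGES: list[tuple[str, Callable[[], bool]]] = [
    ("build", build),
    ("automated tests", run_automated_tests),
]

def run_pipeline() -> None:
    for name, stage in STAGES:
        if not stage():
            # Fail fast: a red stage means the change never reaches production.
            print(f"stage '{name}' failed -- software stays in its last releasable state")
            return
    deploy_to_production()

if __name__ == "__main__":
    run_pipeline()
```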

While it sounds like a fantastic prospect, getting there from the traditional state described previously is by no means easy. People who have been working a certain way for many years are asked to unlearn all of that and absorb a completely new way of working. Here are some steps organizations have taken to facilitate the transition:

Implement a Common Vocabulary

It's crucial to make sure everyone in an organization understands the difference between "deploy" and "release," because the two terms are frequently conflated. Generally, a release is prepared, deployed to production, and then released to customers. This is why some process frameworks, such as SAFe, specifically distinguish deploy from release.

Though the differences between the two blur as fluency in DevOps improves, it's still important for teams and organizations to speak the same language as they evolve by implementing a common vocabulary.
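One common way to make the distinction concrete is a feature flag: the code is deployed to production, but customers only see it once the flag is switched on, which is the release. The sketch below is a minimal, hypothetical illustration with an in-memory flag store, not a specific feature-flag product.

```python
# Minimal sketch: "deploy" puts the code in production, "release" is the
# separate act of exposing it to customers by flipping a flag.
# The flag store here is an in-memory dict; a real system would use a
# feature-flag service or configuration store.
FEATURE_FLAGS = {
    "new_checkout_flow": False,  # deployed, but not yet released
}

def is_released(feature: str) -> bool:
    return FEATURE_FLAGS.get(feature, False)

def checkout(cart: list[str]) -> str:
    if is_released("new_checkout_flow"):
        return f"new checkout for {len(cart)} items"   # released path
    return f"legacy checkout for {len(cart)} items"    # what customers still see

if __name__ == "__main__":
    print(checkout(["book", "pen"]))              # deployed but not released
    FEATURE_FLAGS["new_checkout_flow"] = True     # the "release" step
    print(checkout(["book", "pen"]))              # customers now see the new flow
```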

As organizations take up DevOps practices and principles, this new vocabulary will grow. It's therefore important to recognize that developing a vigorous, proactive learning capability should be part of the DevOps journey, so that everyone builds a shared understanding of what these techniques mean in their environment.

Reduce Risk via Incremental Change in Small Batches

Deploying large bundles or batches of features goes against the DevOps principle of deploying a little at a time, frequently, which reduces risk. The traditional approach of large-scale, higher-risk releases requires scheduling and people to manage those schedules. Teams are then forced to wait for their slot in the release calendar before they can deploy to production and release to customers.

As Forsgren, Humble, and Kim note in Accelerate, "Reducing batch size is another central element of the lean paradigm — indeed, it was one of the keys to the success of the Toyota production system. Reducing batch sizes reduces cycle times and variability in flow, accelerates feedback, reduces risk and overhead, improves efficiency, increases motivation and urgency, and reduces costs and schedule growth."

Because it is easy to measure and has minimal variability, they use deployment frequency as a proxy for batch size.
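As a simple illustration of why that metric is easy to capture, the sketch below counts deployments per ISO week from a list of hypothetical deployment timestamps; all the data here is made up, and a real team would pull it from a deploy log or toolchain API.

```python
from collections import Counter
from datetime import datetime

# Hypothetical deployment timestamps, as they might appear in a deploy log.
DEPLOYMENTS = [
    "2022-01-03T10:15:00", "2022-01-04T16:40:00", "2022-01-05T09:05:00",
    "2022-01-10T11:20:00", "2022-01-12T14:55:00",
]

def deployments_per_week(timestamps: list[str]) -> Counter:
    # Group by ISO year/week; more deployments per week generally implies
    # smaller batches, which is why frequency works as a proxy for batch size.
    weeks: Counter = Counter()
    for ts in timestamps:
        year, week, _ = datetime.fromisoformat(ts).isocalendar()
        weeks[(year, week)] += 1
    return weeks

if __name__ == "__main__":
    for (year, week), count in sorted(deployments_per_week(DEPLOYMENTS).items()):
        print(f"{year}-W{week:02d}: {count} deployments")
```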

Act with Value Streams in Mind

Doing away with a silo-based way of working is crucial to speeding up the flow of value from idea to realization. Dedicating a multi-functional, autonomous team to a product or service, that is, to a value stream, means you have everyone you need to coordinate the flow of value from idea all the way to realization and delivery to the customer. Once the value stream and the team charged with supporting it have been identified, the value stream can be mapped.

Value stream mapping is a critical part of helping a team interpret the flow of value and understand how to make adjustments. Automating the value stream map for continuous adaptation and inspection can be greatly beneficial as doing it manually would be a daunting, time consuming task.
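To make that concrete, the sketch below models a value stream map as a list of stages with active and waiting times and computes flow efficiency; the stage names and hours are hypothetical, chosen only to show how long wait times reveal where flow can be improved.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    active_hours: float   # time actively working the item
    waiting_hours: float  # time the item sits in a queue or handoff

# Hypothetical stages of a value stream, from idea to customer.
VALUE_STREAM = [
    Stage("refine idea", active_hours=4, waiting_hours=40),
    Stage("develop", active_hours=16, waiting_hours=8),
    Stage("test", active_hours=6, waiting_hours=24),
    Stage("deploy & release", active_hours=1, waiting_hours=48),
]

def flow_efficiency(stages: list[Stage]) -> float:
    # Share of total elapsed time spent actually working, not waiting.
    active = sum(s.active_hours for s in stages)
    total = active + sum(s.waiting_hours for s in stages)
    return active / total if total else 0.0

if __name__ == "__main__":
    for s in VALUE_STREAM:
        print(f"{s.name}: {s.active_hours}h active, {s.waiting_hours}h waiting")
    print(f"Flow efficiency: {flow_efficiency(VALUE_STREAM):.0%}")
```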

An organization should function in terms of value streams, allowing each value stream to find improvements and measure its own progress autonomously. Each value stream can then continue to align with the organization's high-level goals and vision as it makes those improvements.

Using Automation to Achieve Consistent, Predictable Pipelines

The introduction of the CI/CD pipeline gave teams the capacity to test, and fail, earlier than ever before. Meanwhile, orchestration tools and cloud technologies brought consistency to environment provisioning. As a result, releases are consistent because they are templated, and predictable because teams have done them before and are familiar with the outcomes, which, in turn, reduces risk.
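A simple way to picture "templated and predictable" is a single parameterized template rendered for every environment. The sketch below is a hypothetical stand-in for what an orchestration or infrastructure-as-code tool would do; the service name and parameters are invented for illustration.

```python
# Sketch: one template, many environments. Because every environment is
# rendered from the same template, provisioning is consistent and releases
# behave the same way each time. Names and values are hypothetical.
ENVIRONMENT_TEMPLATE = {
    "app": "orders-service",
    "replicas": None,       # filled in per environment
    "instance_size": None,  # filled in per environment
    "monitoring": True,     # identical everywhere by construction
}

ENVIRONMENT_PARAMS = {
    "test":       {"replicas": 1, "instance_size": "small"},
    "staging":    {"replicas": 2, "instance_size": "medium"},
    "production": {"replicas": 6, "instance_size": "large"},
}

def render(env: str) -> dict:
    # Apply only the per-environment parameters to the shared template.
    spec = dict(ENVIRONMENT_TEMPLATE)
    spec.update(ENVIRONMENT_PARAMS[env])
    spec["name"] = f"orders-service-{env}"
    return spec

if __name__ == "__main__":
    for env in ENVIRONMENT_PARAMS:
        print(render(env))
```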

Utilizing a CI/CD pipeline or, even better, an end-to-end DevOps toolchain that automates the whole value stream while providing continuous feedback, traceability and continuous compliance gives value stream teams visibility over every aspect of the value journey. Automation also allows comprehensive testing to happen early, giving greater confidence at the point of release.

However, extracting insights into the DevOps toolchain can be challenging on its own, possibly calling for manual interventions or even a custom-built dashboard that can retrieve the data. A value stream management platform can alleviate this issue by connecting every part of the DevOps toolchain to consolidate the value stream. This enables continuous inspection of the value flow, new insights to be gained and adaptations that accelerate flow to be implemented.

Architect for Incremental Build, Test, Deploy, Release

Enormous systems full of dependencies are one of the issues in traditional enterprises. Tightly coupled systems force teams to build, test, deploy and release the entire thing at once. Architecting loosely coupled systems allows work to be isolated into small batches that are much easier to manage.
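One way to see the difference is at the interface level: if a component depends only on a small, stable contract, it can be built, tested, deployed and released on its own. The sketch below shows a hypothetical consumer-side contract check; the contract fields and payload are invented, and this is an illustration of loose coupling rather than a prescription for any particular architecture.

```python
# Sketch: a loosely coupled consumer depends only on a small, stable contract,
# so the provider behind it can change and release independently as long as
# the contract still holds. Names and fields are hypothetical.
ORDER_CONTRACT_V1 = {"order_id": str, "status": str, "total": float}

def satisfies_contract(payload: dict, contract: dict) -> bool:
    # The consumer's only coupling point: the agreed field names and types.
    return all(
        field in payload and isinstance(payload[field], field_type)
        for field, field_type in contract.items()
    )

def consumer_render(payload: dict) -> str:
    if not satisfies_contract(payload, ORDER_CONTRACT_V1):
        raise ValueError("provider response no longer matches the v1 contract")
    return f"Order {payload['order_id']} is {payload['status']} (${payload['total']:.2f})"

if __name__ == "__main__":
    # The provider can add fields freely; the consumer checks only what it needs.
    response = {"order_id": "A-1001", "status": "shipped", "total": 42.5, "carrier": "ACME"}
    print(consumer_render(response))
```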

Stay tuned for the next part in the series, which examines the incremental practices and behaviors needed to fully implement a DevOps adoption.

Bob Davis is CMO at Plutora