Much of the pain that comes with releases in traditional operating environments stems from the disconnect between the development and IT operations teams, often described as a "wall of confusion" between the two silos. To remedy this, many organizations have turned to DevOps to break down the silos and deliver more value, faster and more safely, by balancing throughput and stability.
Start with: Release Management Part 1: Why It Exists
There are two key principles in DevOps that can be used as a starting point:
■ Release weekends are bad
■ Releases should be "like breathing"
The need for these principles is most apparent when the development team "tosses new code over the wall," putting the onus on the operations team to perform the release with limited knowledge of its makeup or origin.
A number of issues can arise when such a release goes wrong:
■ Failure can be disastrous
■ It becomes highly stressful for all involved
■ It can be incredibly expensive, not only to fix but also in potential fines and reputational damage
■ It may even cost people their jobs and halt career progress
■ It creates tremendous tension between teams, causing people to point fingers and pass the buck
It can be helpful to understand what Transformation Consultant Simone Jo Moore identified as her four critical characteristics of release management in DevOps ways of working:
■ Smaller
■ Faster
■ Safer
■ Frictionless
According to Moore, "The conversations between development and IT operations need to be shifted and improved. Development making sure that IT operations know what they need to know continues to be a failure. Not including IT operations in design is a mistake."
Agile and DevOps are meant to help us work in smaller increments in order to get quicker feedback. This speeds up value delivery while reducing risk. As Moore established, this is a safer way of working that minimizes friction as handoffs between teams are reduced.
Releasing small pieces frequently makes it a bit like breathing in that it's something we do so regularly that it becomes routine and not stressful at all. To that end, Agile and DevOps are geared towards constructing more sustainable work environments and reducing burnout resulting from traditional working patterns. Read on to learn about them in greater detail.
DevOps' Long-Term Vision
In an Agile and DevOps world it is possible for teams to release new fixes and features whenever they are ready, which means frequently and on demand. They built it, so they own it. They are autonomous and multifunctional teams that are longstanding and associated with a platform, product or value stream.
With a continuous delivery pipeline, the software always remains in a releasable state. Once a change passes the automated tests, it is deployed and released into production automatically.
While it sounds like a fantastic prospect, getting there from the traditional state described previously is by no means easy. People who have been working a certain way for many years are asked to unlearn all of that and absorb a completely new way of working. Here are some steps organizations have taken to facilitate the transition:
Implement a Common Vocabulary
It's crucial to make sure everyone in an organization knows the difference between "deploy" and "release," because the terms are often used inconsistently. Generally, a release is prepared, deployed to production, and then released to customers. This is why some process frameworks, such as SAFe, explicitly distinguish deploy from release.
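To make the distinction concrete, below is a minimal sketch of one common way to separate the two, using a feature flag: the code is deployed to production but only released to customers when the flag is switched on. The flag store and the checkout functions here are hypothetical, purely for illustration.

```python
# A minimal sketch of separating "deploy" from "release" with a feature flag.
# The flag store and the checkout functions are hypothetical; in practice the
# flag would come from a flag service or configuration system.

FEATURE_FLAGS = {
    # Code for the new checkout flow is deployed to production,
    # but it is not "released" until this flag is switched on.
    "new_checkout_flow": False,
}

def old_checkout(cart: list) -> float:
    """Existing behaviour: simple sum of item prices."""
    return sum(cart)

def new_checkout(cart: list) -> float:
    """New behaviour, deployed dark: applies a bulk discount."""
    total = sum(cart)
    return total * 0.95 if len(cart) >= 5 else total

def checkout(cart: list) -> float:
    # The release happens when the flag flips, with no new deployment needed.
    if FEATURE_FLAGS["new_checkout_flow"]:
        return new_checkout(cart)
    return old_checkout(cart)

if __name__ == "__main__":
    cart = [10.0, 20.0, 30.0, 40.0, 50.0]
    print(checkout(cart))                      # old path: 150.0
    FEATURE_FLAGS["new_checkout_flow"] = True  # "release" without redeploying
    print(checkout(cart))                      # new path: 142.5
```

In practice the flag would live in a configuration service rather than an in-process dictionary, but the principle is the same: the deployment and the release become two separate, independently reversible decisions.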
Though the differences between the two blur as fluency in DevOps improves, it's still important for teams and organizations to speak the same language as they evolve by implementing a common vocabulary.
As organizations take up DevOps practices and principles, the new vocabulary will grow. Therefore, it's important to recognize that developing a vigorous, proactive learning capability should be part of the DevOps journey in order to build a universal awareness of what these techniques mean in their environments.
Reduce Risk via Incremental Change in Small Batches
Deploying large bundles or batches of features goes against the DevOps principle of deploying a little at a time, frequently, which reduces risk. The traditional approach of large-scale, higher-risk releases requires scheduling and people who can manage the schedules. Teams are then forced to wait for their slot in the release calendar before they can deploy to production and release to customers.
As Forsgren, Humble, and Kim note in Accelerate, "Reducing batch size is another central element of the lean paradigm — indeed, it was one of the keys to the success of the Toyota production system. Reducing batch sizes reduces cycle times and variability in flow, accelerates feedback, reduces risk and overhead, improves efficiency, increases motivation and urgency, and reduces costs and schedule growth."
Because it is easy to measure and has low variability, they use deployment frequency as a proxy for batch size.
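As an illustration of how simple that proxy is to compute, here is a small sketch that derives deployment frequency from a list of deployment timestamps; the timestamps themselves are made up.

```python
# Illustrative calculation of deployment frequency, the proxy metric used for
# batch size. The deployment timestamps below are invented for the example.
from datetime import datetime

deploys = [
    datetime(2024, 3, 4, 10, 15),
    datetime(2024, 3, 5, 16, 40),
    datetime(2024, 3, 7, 9, 5),
    datetime(2024, 3, 11, 14, 20),
    datetime(2024, 3, 13, 11, 0),
]

# Deploys per week over the observed window: more frequent deploys
# generally imply smaller batches moving through the pipeline.
window_days = (max(deploys) - min(deploys)).days or 1
per_week = len(deploys) / (window_days / 7)
print(f"{len(deploys)} deploys over {window_days} days "
      f"~ {per_week:.1f} deploys/week")
```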
Act with Value Streams in Mind
Doing away with a silo-based way of working is crucial to speeding up the flow of value from idea to realization. Dedicating a multi-functional, autonomous team to a product or service, in other words to a value stream, means you have everyone needed to coordinate the flow of value from idea through to realization delivered to the customer. After identifying the value stream and the team charged with supporting it, the value stream can be mapped.
Value stream mapping is a critical part of helping a team interpret the flow of value and understand how to make adjustments. Automating the value stream map for continuous inspection and adaptation can be greatly beneficial, as doing it manually would be a daunting, time-consuming task.
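To show the kind of data such automation works from, here is a minimal sketch that derives per-stage wait times and end-to-end lead time from stage timestamps for a single work item; the stage names and timestamps are hypothetical.

```python
# A minimal sketch of the data behind an automated value stream map: stage
# timestamps per work item, from which per-stage time and lead time follow.
# Stage names and timestamps are hypothetical.
from datetime import datetime

# When one work item entered each stage of the value stream.
stages = {
    "idea_accepted": datetime(2024, 5, 1, 9, 0),
    "dev_started":   datetime(2024, 5, 3, 10, 0),
    "dev_done":      datetime(2024, 5, 6, 15, 0),
    "deployed":      datetime(2024, 5, 7, 11, 0),
    "released":      datetime(2024, 5, 7, 16, 0),
}

names = list(stages)
for current, nxt in zip(names, names[1:]):
    hours = (stages[nxt] - stages[current]).total_seconds() / 3600
    print(f"{current:>14} -> {nxt:<12} {hours:6.1f} h")

lead_time = stages["released"] - stages["idea_accepted"]
print(f"End-to-end lead time: {lead_time}")
```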
An organization should operate in terms of value streams, allowing each value stream to identify improvements and measure its own progress autonomously. Each can then stay aligned with the organization's high-level goals and vision as it makes those improvements.
Using Automation to Achieve Consistent, Predictable Pipelines
The introduction of the CI/CD pipeline gave teams the ability to test, and to fail, earlier than ever before, while orchestration tools and cloud technologies brought consistency to environment provisioning. As a result, releases are consistent because they are templated, and predictable because teams have run them before and know the outcomes, which in turn reduces risk.
Utilizing a CI/CD pipeline, or better still an end-to-end DevOps toolchain that automates the whole value stream and provides continuous feedback, traceability and continuous compliance, gives value stream teams visibility over every aspect of the value journey. Automation also enables comprehensive testing early on, allowing for greater confidence at the point of release.
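As a rough sketch of the fail-fast behaviour a pipeline provides, the following hypothetical runner executes build, test and deploy stages in order and stops at the first failure, so a broken change never reaches the release step. The stage commands are placeholders, not a real toolchain configuration.

```python
# A hypothetical, minimal pipeline runner illustrating "test and fail early":
# stages run in a fixed order and the pipeline aborts at the first failure.
import subprocess
import sys

STAGES = [
    ("build",  ["python", "-c", "print('compiling / packaging...')"]),
    ("test",   ["python", "-c", "print('running unit tests...')"]),
    ("deploy", ["python", "-c", "print('deploying to staging...')"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast: later stages, including release, never run.
            print(f"stage '{name}' failed, aborting pipeline")
            sys.exit(result.returncode)
    print("all stages passed; change is releasable")

if __name__ == "__main__":
    run_pipeline()
```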
However, extracting insights into the DevOps toolchain can be challenging on its own, possibly calling for manual interventions or even a custom-built dashboard that can retrieve the data. A value stream management platform can alleviate this issue by connecting every part of the DevOps toolchain to consolidate the value stream. This enables continuous inspection of the value flow, new insights to be gained and adaptations that accelerate flow to be implemented.
Architect for Incremental Build, Test, Deploy, Release
Enormous systems full of dependencies are one of the big issues in traditional enterprises. Tightly coupled systems force teams to build, test, deploy and release the entire thing at once. Architecting loosely coupled systems allows the work to be isolated into small batches that are much easier to manage.
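Here is a minimal sketch of what loose coupling can look like in code, assuming a hypothetical ordering component and payment gateway: the ordering side depends only on a stable contract, so either component can be built, tested, deployed and released on its own schedule.

```python
# A minimal sketch of loose coupling via a stable interface. The classes and
# the contract are hypothetical, purely to illustrate the architectural point.
from typing import Protocol

class PaymentGateway(Protocol):
    """Stable contract between the two components."""
    def charge(self, amount_cents: int) -> bool: ...

class FakePaymentGateway:
    """Test double: lets the ordering component be tested in isolation."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0

class OrderService:
    # Depends on the contract, not on a concrete payment implementation,
    # so either side can change and be redeployed independently.
    def __init__(self, payments: PaymentGateway) -> None:
        self.payments = payments

    def place_order(self, amount_cents: int) -> str:
        return "confirmed" if self.payments.charge(amount_cents) else "rejected"

if __name__ == "__main__":
    service = OrderService(FakePaymentGateway())
    print(service.place_order(2500))   # confirmed
```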
Stay tuned for the next part in the series, which examines the incremental practices and behaviors needed to fully implement a DevOps adoption.
Industry News
Tricentis announced the expansion of its test management and analytics platform, Tricentis qTest, with the launch of Tricentis qTest Copilot.
Redgate is introducing two new machine learning (ML) and artificial intelligence (AI) powered capabilities in its test data management and database monitoring solutions.
Upbound announced significant advancements to its platform, targeting enterprises building self-service cloud environments for their developers and machine learning engineers.
Edera announced the availability of Am I Isolated, an open source container security benchmark that probes users' runtime environments and tests for container isolation.
Progress announced 10 years of partnership with emt Distribution — a leading cybersecurity distributor in the Middle East and Africa.
Port announced $35 million in Series B funding, bringing its total funding to $58M to date.
Parasoft has taken another step in strategically integrating AI and ML quality enhancements where development teams need them most, such as using natural language for troubleshooting or checking code in real time.
MuleSoft announced the general availability of full lifecycle AsyncAPI support, enabling organizations to power AI agents with real-time data through seamless integration with event-driven architectures (EDAs).
Numecent announced they have expanded their Microsoft collaboration with the launch of Cloudpager's new integration to App attach in Azure Virtual Desktop.
Progress announced the completion of the acquisition of ShareFile, a business unit of Cloud Software Group, providing a SaaS-native, AI-powered, document-centric collaboration platform, focusing on industry segments including business and professional services, financial services, industrial and healthcare.
Incredibuild announced the acquisition of Garden, a provider of DevOps pipeline acceleration solutions.
The Open Source Security Foundation (OpenSSF) announced an expansion of its free course “Developing Secure Software” (LFD121).
Redgate announced that its core solutions are listed in Amazon Web Services (AWS) Marketplace.
LambdaTest introduced a suite of new features to its AI-powered Test Manager, designed to simplify and enhance the test management experience for software development and QA teams.