Applying maturity metrics to everything we do in today's business environment frequently demands difficult, far-reaching calculations.
It's not necessarily the measurements spanning huge sets of complex data that present the greatest challenge. More often, it's the metrics meant to gauge the advancement of fuzzier, process-oriented initiatives that leave one grasping for just the right analysis methods.
Attempting to weigh the current level of DevOps maturity within your organization is precisely one of those daunting propositions that can leave today's business and technology pros searching for meaningful answers.
Sure, there are some well-established metrics that can serve as inherent measurements of overall DevOps success, including deployment frequency rates, average lead times, mean time to recovery (MTTR), and of course, any figures resulting from dedicated Application Performance Monitoring (APM).
Yet, perhaps even more valuable than some of these numbers, or of greater import to practitioners for purposes of self-assessment, are metrics that help analyze precisely how ongoing DevOps adoption compares to similar efforts among peers.
At the end of the day, widely touted unicorns can publicize stunning evidence of their DevOps-driven agile transformations; yet for most organizations this is a long-term, iterative process aided greatly by some understanding of how they compare to less revolutionary examples.
After all, getting a feel for where you're ahead of the curve or behind the 8-ball might be just the thing to help DevOps-oriented teams offer evidence of progress, or the need for increased investment, the next time management comes looking for answers.
For instance, related to development, perhaps your teams are already actively tracking feature request lead times; but is there an agreement between business, dev and ops regarding the performance of critical services (transaction counts, performance, uptime, etc.) necessary to meet pre-defined business goals?
In the deployment arena, you likely have systems in place to note changes in frequency; however, does your organizational structure and tooling support cross-functional teams that put greater emphasis on the processes associated with releasing new capabilities, rather than supporting individual roles?
As far as management is concerned, you're probably employing APM to ensure improved visibility, response, uptime and availability. That said, is your monitoring able to distinguish the most critical and recurrent problems, and how they impact business services – without necessitating lengthy configuration and baselining?