The Governance Mismatch
October 23, 2017

Mark Schwartz
Amazon Web Services

DevOps poses a unique challenge and opportunity for IT governance. Traditionally we have governed IT in terms of projects. We lump a number of requirements, fulfilling a number of business needs, together into a bundle we call a project. We then build a business case for that project, put it through some governance process, perhaps an IT steering committee or some variation on one, to decide whether to allow it to proceed and to give it a place in the company's priorities. Once the project is ongoing, the team conducting it reports on its progress against its objectives, and probably against its planned costs and schedule, and some sort of oversight mechanism is in place to review, and perhaps act on, those results. You could say that our unit of governance is the "project," or that we govern at the granularity of the project. The project is a grouping of requirements, a thing that can be planned, an initiative that begins and ends.

Of course, project-oriented governance lends itself to the Waterfall model. A fixed set of requirements; a plan; a Gantt chart; a well-defined series of phases; a result at the end – this is a natural way to treat a conglomeration of requirements that has an approved business case and a committed plan.

DevOps offers us a very different manner of execution. It is flow-based, with new requirements being pulled into a pipeline, worked on, and deployed quickly to users: it optimizes the lead time for getting requirements into production by automating the delivery process and by eliminating handoffs between functional silos. Each individual requirement travels its own path to production, as if it were a packet making its way across the Internet. Our unit of execution is the individual user story or task, and with very frequent deployments, DevOps can reach single-piece flow.

So we find ourselves in a position where we are governing at the project level yet executing at the individual requirement level – a somewhat disturbing mismatch. The consequence is that we are forced to hold requirements in inventory, so to speak, or plan in large batches of requirements.

In order to make a business case and present an adequately sized business proposal to the steering committee, we still need to assemble a large batch of requirements. In order to report on the status of – what? – something that can have a status, I suppose, we still report on the status of projects. We forego, in other words, the full benefits of DevOps – the ability to work leanly by reducing our batch size.
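That cost of batching can be made concrete with a small back-of-the-envelope model. The sketch below is purely illustrative – the requirement counts, the one-day effort figure, and the average_lead_time helper are my own assumptions, not anything from the article – and it simply compares the average time a requirement waits to reach production when work ships as one project-sized batch versus when it is pulled through one piece at a time.

```python
# Toy model: average lead time per requirement when work is released
# in one project-sized batch vs. pulled through one requirement at a time.
# All numbers are illustrative assumptions, not data from the article.

def average_lead_time(num_requirements: int, days_per_requirement: float,
                      batch_size: int) -> float:
    """Average days from the start of work to deployment of each requirement,
    assuming requirements are worked sequentially and a batch only ships
    when every requirement in it is finished."""
    total = 0.0
    for i in range(num_requirements):
        # Which batch this requirement belongs to (0-based).
        batch_index = i // batch_size
        # The batch ships only when its last requirement is done.
        last_in_batch = min((batch_index + 1) * batch_size, num_requirements)
        total += last_in_batch * days_per_requirement
    return total / num_requirements

if __name__ == "__main__":
    # 100 requirements, one day of work each.
    print("project-sized batch:", average_lead_time(100, 1.0, batch_size=100), "days")
    print("single-piece flow:  ", average_lead_time(100, 1.0, batch_size=1), "days")
```

With these made-up numbers, every requirement in the big batch waits the full 100 days for the project to ship, while single-piece flow cuts the average wait to 50.5 days – roughly half – which is exactly the lean benefit that project-sized governance gives away.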

But how else can we govern? What exactly can a steering committee greenlight, and how does it know how that thing is progressing?

I'd like to suggest that the answer is simple and staring us in the face. Or rather, the answers, because I believe there are two approaches. The first is to govern by business objectives. We determine a business objective that will have concrete business outcomes, preferably measurable ones. Then we make a business case – formal or informal – that the objective is worth investing a particular amount of money in. If we decide that it is, we hand the objective to an empowered team and ask them to start accomplishing it – immediately. Because we are in a DevOps world, they should be able to begin deploying functionality virtually right away. We observe the business results they achieve, determine whether they are worth continued investment, and adjust our plans.

The second alternative is to govern IT investment the way we govern the rest of our company – without a governance process. The IT organization is allocated a budget and expected to make good decisions on how to spend it to accomplish the company's objectives. It is assessed and guided like any other part of the company – let's say that the CEO evaluates the CIO's performance and gives feedback to steer IT's direction. What is evaluated is the business outcome of the IT organization's decisions. The advantage of this approach is that it allows for continual transformation, continuous investments in systems, rather than the periodic, on-again-off-again flow of investment when we organize around projects.

It's one thing to reap operational advantages from DevOps. It is a different thing to maximize the value that DevOps can deliver to the enterprise, strategically as well as tactically. For that, we need to rethink governance.

Mark Schwartz, Enterprise Strategist at Amazon Web Services (AWS), is the author of "A Seat at the Table"
