Taking a Low-Risk Approach to DevOps for Mainframe Organizations - Part 2
July 17, 2018

Mark Levy
Micro Focus

In my first blog in this series, I highlighted some of the main challenges teams face when trying to scale mainframe DevOps.

Start with Taking a Low-Risk Approach to DevOps for Mainframe Organizations - Part 1

To get past these hurdles, the key is to develop an incremental approach that enables teams to capture value along each step of the journey. With this approach, software bottlenecks are identified and addressed based on the business need – enabling Dev, QA, and Ops teams to work together to deliver better business outcomes. Here are the three major steps for taking an incremental approach to DevOps.


Make Work Visible

The first thing you need to do is "See the System." It's important to get a common view of the work to ensure transparency across the organization. At a system level, you'll need to create a common view of your mainframe deployment pipelines and their interdependencies with distributed environments. This includes highlighting the bottlenecks, waste, and other inefficiencies that can be addressed using DevOps practices. Ultimately, this common view will serve as the forcing function to help align teams across the organization.
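To make "seeing the system" concrete, here is a minimal Python sketch of a shared pipeline view. The stage names, durations, and manual/automated flags are purely illustrative assumptions, not measurements from any real environment or tool; the point is that even a rough model surfaces total lead time, the biggest bottleneck, and the manual steps worth automating first.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    avg_lead_time_hours: float   # time a change spends waiting in or moving through this stage
    manual: bool                 # True if the step still requires manual effort

# Hypothetical view of a mainframe deployment pipeline (illustrative numbers only)
pipeline = [
    Stage("Code and unit test", 8, manual=False),
    Stage("Build and package", 4, manual=True),
    Stage("Provision test environment", 72, manual=True),
    Stage("Integration and regression test", 40, manual=True),
    Stage("Change approval board", 96, manual=True),
    Stage("Production deployment", 6, manual=True),
]

total = sum(s.avg_lead_time_hours for s in pipeline)
bottleneck = max(pipeline, key=lambda s: s.avg_lead_time_hours)

print(f"End-to-end lead time: {total:.0f} hours")
print(f"Largest bottleneck:   {bottleneck.name} ({bottleneck.avg_lead_time_hours:.0f} hours)")
print("Manual steps to target for automation:")
for s in pipeline:
    if s.manual:
        print(f"  - {s.name}")
```

Even a toy model like this tends to show that waiting, not coding, dominates end-to-end lead time, which is exactly the conversation a shared view should start.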

Gaining application visibility, control, and insight will also provide teams with a better understanding of the impact of a software change at the application level. This transparency allows for better estimates of the work in progress, which can be leveraged to reduce rework and provide early detection of production issues. By embracing these agile development practices and leveraging modern IDEs, developers can discover issues earlier in the process, boosting productivity, delivering secure features faster, and improving collaboration and alignment across both mainframe and distributed teams.

Integrate into the DevOps Toolchain

In order to support faster and more frequent releases, the next step is to ensure the DevOps toolchain is integrated across the entire value stream, from the planning phase straight through to managing the application in production. This enables a seamless "best of breed" integration of mainframe tools into the broader DevOps toolchain ecosystem.

If your current set of tools doesn't provide adequate integration, it's time to consider upgrading to more modern mainframe solutions. Think about it this way: the more complex your software delivery process is, the greater the need for a flexible, adaptive, and integrated DevOps toolchain.

Furthermore, the integration architecture needs to be open and extensible, so that it's capable of integrating with open source tools while maintaining access to, and the integrity of, core systems and data. This will help reduce your reliance on costly mainframe infrastructure.
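As a rough illustration of what "open and extensible" can mean in practice, the sketch below relays a mainframe build result into a downstream CI webhook over REST. Both endpoints (MAINFRAME_BUILD_API, CI_WEBHOOK_URL) and the payload fields are hypothetical placeholders, not any vendor's actual API; the pattern, pull an event from one tool and push it into the next over open interfaces, is the part that matters.

```python
import requests

# Hypothetical endpoints: substitute your mainframe build/SCM tool's REST API
# and your CI/CD orchestrator's webhook URL.
MAINFRAME_BUILD_API = "https://mainframe-tools.example.com/api/builds/latest"
CI_WEBHOOK_URL = "https://ci.example.com/webhooks/mainframe-pipeline"

def relay_build_event():
    """Pull the latest mainframe build result and push it into the wider toolchain."""
    build = requests.get(MAINFRAME_BUILD_API, timeout=30).json()

    payload = {
        "application": build.get("application"),
        "version": build.get("version"),
        "status": build.get("status"),           # e.g. "SUCCESS" / "FAILED"
        "artifacts": build.get("artifacts", []), # load modules, listings, etc.
    }

    # Trigger the downstream (distributed) pipeline with the mainframe build metadata
    response = requests.post(CI_WEBHOOK_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.status_code

if __name__ == "__main__":
    print("Webhook accepted with HTTP", relay_build_event())
```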

Optimize the Mainframe Deployment Pipeline

Once the DevOps toolchain has been integrated, you can begin drilling down into pipeline optimization. The previously discussed common view of the mainframe deployment pipelines should provide guidance around sources of waste and long lead times. Primary sources of application delivery cost and waste include:

■ Lack of understanding of the business requirements, leading to high development rework costs and long lead times

■ Too much manual effort in building, provisioning, testing, and deploying applications and environments

■ Too many meetings and slow approval processes around change and release management

■ Failed deployments and production incidents 

A good place to start is with automation. Automating mainframe environment provisioning, testing, and deployment will dramatically reduce manual effort, increase deployment frequency, shorten lead times, and cut the number of production incidents.
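Here is a minimal sketch of what that automation might look like as a simple pipeline driver. The provision-env, deploy-app, and run-tests commands are hypothetical stand-ins for whatever provisioning, deployment, and test tooling you actually use; the key idea is chaining the steps so they run without hand-offs and fail fast.

```python
import subprocess
import sys

# Hypothetical commands: substitute the CLIs of your own provisioning,
# deployment, and test tools (none of these names refer to real products).
PIPELINE_STEPS = [
    ("Provision test environment", ["provision-env", "--template", "cics-db2-test"]),
    ("Deploy application",         ["deploy-app", "--package", "payments-1.4.2.tar"]),
    ("Run regression tests",       ["run-tests", "--suite", "regression", "--junit-out", "results.xml"]),
]

def run_pipeline():
    for name, cmd in PIPELINE_STEPS:
        print(f"==> {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast so a broken change never reaches the next stage
            print(f"Step failed: {name}", file=sys.stderr)
            sys.exit(result.returncode)
    print("Pipeline completed successfully")

if __name__ == "__main__":
    run_pipeline()
```

In practice you would run a driver like this from your CI/CD orchestrator rather than by hand, so every change flows through the same automated path.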

Having the option to automate and re-host mainframe test environments can dramatically accelerate time-to-market at a much lower cost. This is because testing can consume an enormous amount of mainframe processing power, which keeps mainframe costs high. Lead times for mainframe test environments are often days to weeks. Re-hosting pre-production testing from the mainframe onto lower cost platforms can reduce lead times from days to minutes. In addition, re-hosting test environments allows testing to scale up as required with significantly lower operating costs.
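To illustrate the re-hosting idea, the sketch below spins up a disposable containerized test environment, points the regression suite at it, and tears it down afterward. The container image name and the run-tests CLI are hypothetical placeholders for your re-hosting platform and test tooling; the point is that an environment which once took days to request can be created and destroyed on demand.

```python
import subprocess
import uuid

# Hypothetical container image providing a re-hosted runtime for the
# application under test (substitute whatever re-hosting platform you use).
REHOSTED_IMAGE = "registry.example.com/rehosted-mainframe-test:latest"

def run_ephemeral_test_env():
    name = f"mf-test-{uuid.uuid4().hex[:8]}"
    try:
        # Spin up a disposable test environment in seconds instead of days
        subprocess.run(
            ["docker", "run", "-d", "--name", name, "-p", "3270:3270", REHOSTED_IMAGE],
            check=True,
        )
        # Point the existing regression suite at the re-hosted environment
        subprocess.run(["run-tests", "--target", "localhost:3270"], check=True)
    finally:
        # Tear the environment down so it costs nothing when idle
        subprocess.run(["docker", "rm", "-f", name], check=False)

if __name__ == "__main__":
    run_ephemeral_test_env()
```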

Mainframe Teams Need to Take the Initiative

In this incredibly competitive digital economy, the mainframe can serve as a critical competitive differentiator, but only if it participates in the digital transformation of the enterprise. The business requires "on-demand" software delivery, which means mainframe teams have to embrace the DevOps culture of change and continuous improvement.

By breaking out of silos and taking the initiative to implement an incremental strategy for mainframe DevOps, teams will be able to spend less time on the mechanics of delivering applications and more time on innovative work that adds real value to the organization.

Mark Levy is Director of Strategy at Micro Focus
