The Mainframe is Here to Stay: 5 Take-Aways for Mainframe DevOps
May 03, 2018

Chris O'Malley
Compuware

Forrester Research recently conducted a survey of 160 mainframe users across the globe and found that mainframe workloads are increasing, driven by trends including blockchain, modern analytics and more mobile activity hitting the platform. 57 percent of these enterprises currently run more than half of their business-critical applications on the platform, with this number expected to increase to 64 percent by next year. No surprise there, as the mainframe's security, reliability, performance, scalability and efficiency have consistently proven unbeatable for modern transactional applications.

However, these enterprises have only replaced 37 percent of the mainframe workforce lost over the past five years. The prospect of increased workloads, combined with shrinking mainframe skillsets, has huge implications for mainframe DevOps. The only way for organizations to solve this skills gap crisis is by optimizing developer productivity. Drilling down a level further, what does this all mean for mainframe DevOps?

1. DevOps teams must view and treat the mainframe as a first-class digital citizen

DevOps teams are obsessed with establishing and measuring Key Performance Indicators (KPIs) to continually improve outcomes. These KPIs concern quality (minimizing the number of code defects that make it into production), efficiency (time spent developing) and velocity (the number of software products or features that can be rolled out in a given amount of time).
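As an illustration of how these three KPIs might be computed, here is a minimal sketch using hypothetical release data and simple metric definitions (the numbers and formulas are illustrative assumptions, not from the Forrester survey):

```python
# Hypothetical release records for one mainframe team (illustrative numbers only).
releases = [
    {"features": 4, "defects_escaped": 1, "dev_hours": 120},
    {"features": 6, "defects_escaped": 0, "dev_hours": 110},
    {"features": 5, "defects_escaped": 2, "dev_hours": 140},
]

total_features = sum(r["features"] for r in releases)
total_defects = sum(r["defects_escaped"] for r in releases)
total_hours = sum(r["dev_hours"] for r in releases)

# Quality: share of shipped features with no escaped production defects.
quality = 1 - total_defects / total_features
# Efficiency: developer hours spent per delivered feature.
efficiency = total_hours / total_features
# Velocity: features delivered per release cycle.
velocity = total_features / len(releases)

print(f"quality={quality:.2f}  efficiency={efficiency:.1f} h/feature  velocity={velocity:.1f}")
```

Even a spreadsheet-level calculation like this gives management the baseline it needs to see whether process changes actually move the needle.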

While KPIs are common within non-mainframe teams, the concept can be foreign to mainframe teams, despite how vital mainframe processing is to the customer experience: 72 percent of firms noted their customer-facing applications are completely or very reliant on mainframe processing.

While firms recognize the importance of quality, velocity and efficiency, significant percentages (27, 28 and 39 percent respectively) are not measuring them. This is a cause for concern: the reduction of mainframe-specific developer expertise poses a serious threat to quality, velocity and efficiency, yet management has no means to quantify the risks.

2. Teams must honestly assess developer behavior on the platform and proactively identify areas for improvement

Having mainframe development KPIs in place and consistently measuring progress against them is a great first step, but it's not enough. Organizations remain heavily dependent on mainframe applications, and DevOps teams can't afford to hypothesize what changes may or may not move the needle on KPIs — it is much better to rely on real empirical evidence.

New approaches now leverage machine learning applied to real behavioral data. This enables teams to make smart, high-impact decisions that support continuous DevOps improvements.

3. Teams must integrate the mainframe into virtually every aspect of the DevOps toolchain

A DevOps toolchain is the set of tools that aid in the development, delivery and management of applications in a DevOps environment. These toolchains boost productivity as developers work across an end-to-end application; however, mainframe code, which supports the vital transaction-processing component of most applications, is often excluded. This slows down the entire effort and dilutes the positive impact of such tools on other application components.

Mainframe code must be fully incorporated in these toolchains across the entire delivery pipeline including source code management, code coverage, unit testing, deployment and more.

4. Mainframe workloads require a "cost-aware" approach

Cost optimization becomes a key consideration as the mainframe takes on bigger workloads. Many organizations are unfamiliar with exactly how mainframe licensing costs (MLCs) are determined and don't make sufficient attempts to manage them, which can drive up costs unnecessarily.

MLCs are determined by a metric known as the peak four-hour rolling average MSU (million service units) value across all logical partitions (LPARs). In simple terms, an MSU represents an amount of processing work. Peak MSU values can be kept to a minimum by diligently tuning each application to minimize its individual consumption of mainframe resources, while the rolling average can be kept in check by staggering the timing of application workloads to avoid collective utilization peaks.
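The four-hour rolling average can be sketched in a few lines. This is a simplified illustration using hypothetical hourly MSU samples for a single LPAR; in practice the rolling average is computed from finer-grained measurement data by IBM's sub-capacity reporting tooling:

```python
# Hourly MSU consumption samples for one LPAR over half a day (hypothetical values).
msu_by_hour = [120, 130, 150, 400, 420, 410, 380, 200, 150, 140, 130, 120]

WINDOW = 4  # four-hour rolling window

# One rolling average per window position across the samples.
rolling = [
    sum(msu_by_hour[i:i + WINDOW]) / WINDOW
    for i in range(len(msu_by_hour) - WINDOW + 1)
]

# The monthly license charge is driven by the peak of this rolling average.
peak_r4ha = max(rolling)
print(f"peak four-hour rolling average: {peak_r4ha:.1f} MSUs")
```

Note that the peak here comes from the cluster of heavy hours in the middle of the series; spreading those same workloads across quieter hours would lower the peak window, and with it the charge, without reducing the total work done.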

New techniques provide visually intuitive insight into how batch jobs are initiated and executed, and into the impact of those jobs on MLCs. This lets non-mainframe experts manage costs as effectively as longtime platform specialists.

5. An ongoing emphasis must be placed on recruiting and cultivating top computer science talent for the mainframe

It's a polyglot world, and millennials who are fluent in working on the mainframe will have a distinct advantage. Working on the mainframe also gives newer developers an opportunity to contribute to some of the most exciting, cutting-edge software products being created today. We recently met with a class of young computer science graduates; far from dismissing the notion of cultivating mainframe expertise, they showed palpable excitement and enthusiasm for learning more.

The mainframe can be an extremely valuable asset, giving DevOps teams — and the organizations they work for — the distinct advantage of being both big and fast. Heavier workloads combined with less mainframe talent will certainly present challenges, though these are not insurmountable. The five take-aways described here are an excellent way to amplify the mainframe's intrinsic and unique attributes as well as mainframe resources in a DevOps world.

Chris O'Malley is CEO of Compuware