DevOps on the Mainframe
October 26, 2015

Chris O'Malley
Compuware

Most discussions of DevOps assume that the "dev" is being done exclusively in programming languages of recent vintage and that the "ops" are occurring exclusively on distributed or cloud platforms.

There are, however, at least three compelling reasons to have a DevOps discussion that focuses on the mainframe.

Reason #1: Necessity

Much, if not most, of the world's most economically valuable code still runs on the mainframe in languages such as COBOL, PL/I, and Assembler. A lot of people fail to acknowledge this reality — but as we eagerly hail rides and order pizzas on our smartphones, global banks and other major corporations are executing billions of transactions worth trillions of dollars using their so-called "legacy" systems.

These systems are not going anywhere. No matter how often and how loudly myopic industry pundits may predict the demise of the mainframe, the empirically verifiable truth is instead that mainframe owners have no plans to jettison the platform. Most, in fact, see their mainframe workloads growing as their businesses grow and as they add new logic to their systems of record.

Plus, the mainframe platform itself has evolved dramatically in recent years, despite a lack of attention from the trade press. IBM's z13 is the most powerful, reliable, scalable, and secure computing platform on the planet. It also runs Linux and Java. And despite misconceptions to the contrary, its incremental costs for additional workloads are far less than those for distributed and cloud environments.

It simply doesn't make sense to leave such a massive volume of high-value application logic running on such a powerful platform out of the DevOps discussion. If it is worthwhile to apply DevOps best practices to the code that lets us "like" our cousin's neighbor's classmate's baby pictures, it is reasonable to conclude that there may be equal or greater value in applying those same practices to the code that empowers international trade and currency exchange.

Reason #2: Uniqueness

There is nothing inherently unique about applying DevOps best practices to the mainframe. Code is code. So the only inherent difference between managing the lifecycle of a COBOL app and the lifecycle of a Java app is the programming syntax — which is cognitively trivial.

There are, however, significant conditional differences that make DevOps on the mainframe a unique challenge. For one thing, COBOL programs are typically long, involved, and poorly documented. Because of their longevity, these applications have also undergone a lot of modification and become deeply intertwined with each other. This makes code-parsing, runtime analysis, and visual mapping of inter-application dependencies much more important in the mainframe environment than they usually are in Java/C++/etc. environments.
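As a toy illustration of the kind of code-parsing involved (assuming nothing about any particular vendor tool), the following Python sketch scans COBOL source for static CALL statements and builds a caller-to-callee map. The program names and source fragments are made up for the example; a real analyzer would also have to handle dynamic calls, copybooks, and CICS/IMS interfaces.

```python
import re
from collections import defaultdict

# Toy sketch: match only literal, static calls of the form CALL 'PROGNAME'.
CALL_PATTERN = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def map_dependencies(sources):
    """Build a caller -> set-of-callees map from {program_name: source_text}."""
    deps = defaultdict(set)
    for program, text in sources.items():
        for callee in CALL_PATTERN.findall(text):
            deps[program].add(callee.upper())
    return dict(deps)

# Hypothetical program fragments, invented for illustration:
sources = {
    "PAYROLL": "PROCEDURE DIVISION.\n    CALL 'TAXCALC' USING WS-REC.\n    CALL 'AUDITLOG'.",
    "TAXCALC": "PROCEDURE DIVISION.\n    CALL 'RATETBL'.",
}
print(map_dependencies(sources))
```

Even this naive scan surfaces the transitive chain PAYROLL calls TAXCALC calls RATETBL, which is exactly the kind of inter-application dependency picture that decades-old, heavily modified COBOL estates need before changes can be made safely.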

For another, the tools that mainframe development teams have historically used, and that they still use in 9 out of 10 cases, are very unlike those being used by today's more freshly tooled Java-centric development teams. So inclusion of pre-Java application logic in the broader enterprise DevOps environment will usually require substantial re-tooling of mainframe code management.

Finally, the cultural shift to DevOps, Agile, and continuous delivery can initially be much greater for mainframe shops that have focused for decades (with good reason) on application stability and hyper-rigorous change management rather than on efficient scrumming and rapid response. This cultural shift places special demands on IT leadership, above and beyond the other process and technology changes required for best-practices DevOps.

None of these are insurmountable obstacles to bringing DevOps to the mainframe (or, perhaps more precisely, bringing the mainframe to DevOps). But they do represent a unique set of near-term challenges that require their own discussion, strategy, actions, tools, and leadership.

Reason #3: Significant business upside

The third and most compelling reason to give DevOps on the mainframe its own dedicated focus is that a business gains tremendous advantages when it can more adaptively and efficiently re-align its COBOL code base with the ever-changing dictates of the world's increasingly technology-centric markets.

That code base, after all, is the digital DNA of the business. It defines how the business is operated, measured, and managed. So no business with a substantial mainframe environment can successfully compete in today's fast-moving markets if that environment remains slow and unresponsive.

Conversely, a mainframe-centric company that does manage to bring DevOps to the mainframe will be able to out-maneuver mainframe-centric competitors who fail to do likewise.

This is especially true as mainframe applications increasingly act as back-ends for customer-facing mobile apps and customer analytics. Companies that can adaptively update their mainframe code will have a distinct advantage when it comes to customer engagement, because they will be able to deliver better mobile apps and get more relevant analytic results.

The advantages of the DevOps-enabled mainframe, though, go well beyond more adaptive COBOL code. The mainframe platform is the most cost-effective place to host any application logic that has to be fast, scalable, reliable, and secure. So IT organizations creating new workloads can reap massive economic advantages from running those workloads on the mainframe.

But they won't run those workloads on the mainframe if they can't easily modify and extend those applications as circumstances require. DevOps-enablement of the mainframe is therefore a prerequisite for taking advantage of the mainframe's superior technical performance and economics.

There's also a fourth compelling reason for elevating the DevOps-on-mainframe discussion: Forward-thinking IT organizations are already successfully doing DevOps on the mainframe — and reaping the considerable associated rewards. So the mainframe DevOps discussion is not just theoretical. It is also practical and actionable. And it starts delivering ROI quickly.

So if you have a mainframe and have been leaving it out of your DevOps initiatives, stop. You are robbing your business of a real source of significant competitive advantage.

And if you don't have a mainframe, pay attention anyway. It may be worth getting one — or offering your talents to a company that does. Mainframes have been around for a long time, and they will be around for a long time to come.

Who knows? The mainframe may even outlast the on-premise x86 commodity server infrastructure that was once touted as its replacement, but that is not aging nearly as well and may therefore wind up expiring long before the mainframe ever does.

Chris O'Malley is CEO of Compuware.
