Toyota runs one of the most sophisticated and efficient production systems in the world. This is due in large part to a device called an "andon cord" on every production line, which gives any team member the authority to halt production when a quality or process problem is found and keep it halted until the problem is resolved.
One can hardly imagine Toyota denying those directly responsible for production a means of detecting and flagging defects. But this is exactly what happens in software delivery when mainframe developers don't have the tools to perform early and continuous testing.
Empowering each team member with the ability to identify and resolve defects at any point in the manufacturing process helps Toyota ensure high quality and reliability amidst the introduction of new technological innovations. Similarly, empowering all developers to perform unit testing — the type of testing performed at the earliest point in the software development cycle — can ensure high quality software delivery.
Why Mainframe Testing Must Shift Left and Be Automated
Most innovative web-based applications ultimately depend on finely tuned mainframe code to complete their basic functions. This tuning should begin with individually and independently scrutinizing the smallest part — the unit — of an application for proper operation. Just as Toyota empowers employees to maintain quality and reliability with andon cords, automating unit testing enables development teams to find and fix problems as early as possible by shifting testing left. This prevents defects from being baked into software and becoming more costly and time-consuming to undo later in the "production system."
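To make the idea concrete, here is a minimal sketch of what a shifted-left unit test looks like. The business rule (a simple-interest calculation) and all names are hypothetical, standing in for logic that might be ported from or mirror mainframe code; a real mainframe unit-testing tool would generate and run such tests against COBOL or PL/I directly.

```java
// Hypothetical unit under test: the smallest independently verifiable
// piece of logic, exercised with no database, network, or batch job.
public class InterestCalculatorTest {

    // Interest = principal * rate * (days / 365), rounded to whole cents.
    static long simpleInterestCents(long principalCents, double annualRate, int days) {
        return Math.round(principalCents * annualRate * days / 365.0);
    }

    public static void main(String[] args) {
        // Each check scrutinizes the unit in isolation, so a defect is
        // caught here rather than "baked in" and found downstream.
        if (simpleInterestCents(100_000, 0.05, 365) != 5_000)
            throw new AssertionError("one full year at 5% should accrue 5,000 cents");
        if (simpleInterestCents(100_000, 0.05, 0) != 0)
            throw new AssertionError("zero days should accrue nothing");
        if (simpleInterestCents(0, 0.05, 365) != 0)
            throw new AssertionError("zero principal should accrue nothing");
        System.out.println("All unit tests passed.");
    }
}
```

Because the test has no external dependencies, it can run automatically on every code change, which is what makes shifting left practical.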
After unit testing, developers and/or QA testers should be empowered with automated functional testing to test a slice of functionality of a larger system. These tests ensure that the application will do what users expect it to do and ultimately what the business and customers expect the software to accomplish.
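The contrast with unit testing can be sketched as follows. This hypothetical functional test exercises a slice of behavior, an account transfer, end to end through several units working together, and asserts on the outcome a user would observe rather than on any single routine; the account names and in-memory ledger are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of a functional test over a hypothetical transfer flow:
// validate, debit, credit -- several units composed into one user-visible action.
public class TransferFunctionalTest {

    static final Map<String, Long> balancesCents = new HashMap<>();

    // The "slice of functionality" under test.
    static boolean transfer(String from, String to, long amountCents) {
        long fromBal = balancesCents.getOrDefault(from, 0L);
        if (amountCents <= 0 || fromBal < amountCents) {
            return false; // business rule: reject overdrafts and non-positive amounts
        }
        balancesCents.put(from, fromBal - amountCents);
        balancesCents.put(to, balancesCents.getOrDefault(to, 0L) + amountCents);
        return true;
    }

    public static void main(String[] args) {
        balancesCents.put("ACCT-A", 10_000L);
        balancesCents.put("ACCT-B", 0L);

        // Expectation: a valid transfer moves money and reports success.
        if (!transfer("ACCT-A", "ACCT-B", 2_500)) throw new AssertionError("transfer should succeed");
        if (balancesCents.get("ACCT-A") != 7_500) throw new AssertionError("debit should be applied");
        if (balancesCents.get("ACCT-B") != 2_500) throw new AssertionError("credit should be applied");

        // Expectation: an overdraft is rejected and balances are left unchanged.
        if (transfer("ACCT-A", "ACCT-B", 99_999)) throw new AssertionError("overdraft should be rejected");
        if (balancesCents.get("ACCT-A") != 7_500) throw new AssertionError("balance should be untouched");
        System.out.println("All functional tests passed.");
    }
}
```

Note that the assertions describe business outcomes ("the balance is untouched after a rejected transfer"), which is exactly what separates functional tests from the unit tests that precede them.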
Without automation, the overall software release pipeline must decelerate to accommodate the mainframe's manual testing processes. The other option is to sacrifice time and effort spent on testing, leading to the rollout of unstable, unusable products. Shifting testing left enables DevOps teams to move quickly without compromising quality.
Eventually, a lack of essential mainframe testing automation comes to resemble what happens in the famous "I Love Lucy" chocolate factory scene, where one vital step in a process — in Lucy's case, the wrapping of chocolates — cannot keep pace with the others.
Enabling Automated Testing on the Mainframe
Enabling automated testing of mainframe code requires a modern tool that offers sophisticated features within the developer's preferred experience, including:
■ Automated creation and triggering of tests
■ Automated collection of virtualized test data
■ Identification of overall mainframe code quality trends
■ Sharing and reusing of test assets
■ Creation of repeatable tests
■ Enforcement of testing policies
Code coverage is another critical component of automated testing. Code coverage metrics provide insight into the degree to which source code is executed during a test — that is, which lines of code have or have not been executed, and what percentage of an application has or has not been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as the code is promoted towards production.
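The arithmetic behind a line-coverage metric is simple, and a sketch makes the "which lines ran, which did not" idea concrete. The example below assumes an instrumentation step has already recorded which executable lines ran during a test; the line numbers and module are hypothetical.

```java
import java.util.Set;

// A minimal sketch of deriving a line-coverage percentage:
// coverage = executed executable lines / total executable lines.
public class CoverageSketch {

    static double lineCoveragePercent(Set<Integer> executableLines, Set<Integer> executedLines) {
        long covered = executableLines.stream().filter(executedLines::contains).count();
        return 100.0 * covered / executableLines.size();
    }

    public static void main(String[] args) {
        // Hypothetical module with 10 executable lines; the test run hit 7 of them.
        Set<Integer> executable = Set.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        Set<Integer> executed = Set.of(1, 2, 3, 4, 5, 6, 7);
        double pct = lineCoveragePercent(executable, executed);
        System.out.printf("Line coverage: %.1f%%%n", pct); // lines 8-10 remain untested
    }
}
```

The value of the metric is less the percentage itself than the complement: the set of untested lines tells developers exactly where to add tests before the code is promoted toward production.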
Like automated testing, code coverage on the mainframe has traditionally been a challenge: metrics would identify when mainframe code was tested, but would not drill down to show which portions of that code, or what percentage of it, were actually exercised. Using an automated testing tool that integrates this more specific information allows developers to more quickly and accurately spot areas of uncovered code that need attention, just as they do in Java.
Code quality management across platforms is extremely valuable to mainframe-backed organizations, since their ability to bring new digital deliverables to market is often contingent on simultaneously updating code across both back-end mainframe systems of record and front-end mobile/web systems of engagement. Incorporating mainframe code into cross-platform code quality initiatives brings a higher degree of confidence and rigor to end-to-end testing processes.
Treat the Mainframe Like the Engine It Is
While new technologies have emerged to address previously unsolvable business problems, the mainframe has endured as the engine propelling large enterprise IT. No other platform can match its security, availability and scalability in meeting the workload challenges brought on by the mobile explosion.
According to BMC's most recent mainframe user survey, 59 percent of respondents saw increasing mainframe transaction volumes in 2018, while 92 percent predict long-term viability for the platform — the third year in a row of increases and the highest level since 2013.
Recognizing the vital role that mainframe code continues to play, BMC's survey finds that almost half of mainframe users are now applying DevOps practices to their mainframe environments. But bringing the mainframe up to the speed and pace of DevOps requires mainframe code to be fully included, as early as possible, across the entire DevOps toolchain. Automating testing is an essential step in enabling that shift.