IT Testing: What's Appropriate for Replatforming Projects
April 20, 2017

Craig Marble
Astadia

Testing is an important component of any IT project. Releasing a serious flaw to production in a business-critical application can have disastrous effects. At best, it's a nuisance that annoys users and reduces confidence in an organization's competence. At worst, it can bring an entire company to its knees and result in losing customers and investors. In extreme cases, even the very existence of an organization could be threatened.

So, one could argue that testing is the most important phase of an IT project. It's also time-consuming and expensive. It's essential to strike a balance between a testing program rigorous enough to ensure a quality product and one that keeps your project's cost-to-value ratio in check. But when you're dealing with replatforming projects, how much testing is enough testing?


Replatforming: Putting New Tires on a Car

First, let's clarify what we mean by replatforming. Simply put, replatforming projects move an application from one operating platform to another. The most obvious example is moving an application from a mainframe to open systems or the cloud. This process is also referred to as a forklift or lift-and-shift approach. Whatever it's called, why would you take this approach rather than rewriting the application or replacing it with a package or service?

The answer is simple. Many of the applications targeted for replatforming represent significant investment and business differentiation. Replatforming is a great way to leverage that investment and retain the proven business logic that differentiates you from your competition. It's kind of like putting new tires on a car.

When tires lose their tread, they introduce significant risk and expense. They decrease gas mileage, they increase the likelihood of skidding in bad weather, they can affect your ability to stop quickly in an emergency, and they're much more prone to blowouts. But you don't buy a new car just because your tires are worn out, do you? Of course not. You get new tires when the old ones reach end-of-life and are no longer sustainable. The same is true for operating platforms.

To Test or Not to Test

Legacy platforms and burning platforms are not just an excessive expense that impacts the bottom line of your business. They also pose significant risk in the form of unsupported hardware and software, as well as the growing difficulty of finding people with the skills to administer those platforms. The fastest, cheapest way to address this problem is to replatform your critical, bread-and-butter applications to contemporary environments like open systems and the cloud. Of course, doing this requires IT testing to make sure the applications are functionally equivalent and perform as well or better on the new platform.
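For batch-style workloads, much of that functional-equivalence check can be automated by replaying the same input through both environments and diffing the outputs. Here's a minimal sketch of the idea in Python; the file paths and the timestamp-normalization step are illustrative assumptions, not part of any particular migration toolset.

```python
import difflib
import re
from pathlib import Path

# Hypothetical capture locations for the same batch job's output, one
# from the legacy platform and one from the new platform (illustrative).
LEGACY_OUT = Path("runs/legacy/daily_billing.out")
TARGET_OUT = Path("runs/cloud/daily_billing.out")

TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def normalize(text: str) -> str:
    """Strip run-specific noise (timestamps, here) so that only genuine
    functional differences surface in the diff."""
    return TIMESTAMP.sub("<TS>", text)

def outputs_match(legacy: Path, target: Path) -> bool:
    old = normalize(legacy.read_text()).splitlines()
    new = normalize(target.read_text()).splitlines()
    diff = list(difflib.unified_diff(old, new, "legacy", "target", lineterm=""))
    print("\n".join(diff))
    return not diff  # an empty diff means the outputs are equivalent

if __name__ == "__main__":
    if outputs_match(LEGACY_OUT, TARGET_OUT):
        print("No functional drift detected in this job's output.")
```

The same replay-and-compare pattern scales up naturally: run it across a library of representative production inputs, and any line that survives normalization is a real behavioral difference worth investigating.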

Some of our clients prefer to err on the side of caution and insist on what I call fully-loaded testing. That is, they require testing every line of code and every path of logic. While their commitment to quality is admirable, the result is disproportionately high project costs and extended project timelines. This type of extreme testing can even kill the ROI of your modernization project.

When replatforming an application, you can be confident that the business logic that works reliably today will continue to work just as reliably on the new platform. For example, the business logic of an existing COBOL, C, C#, or Java application running on a mainframe or an open-systems server today will continue to run on a cloud platform. Testing a replatformed application should therefore be limited to the areas that were modified to accommodate the target environment, plus the application's performance on the new platform.

IT Testing Optimized for Replatforming

One of the benefits of reusing applications on a new, lower-cost platform is that only a relatively small portion of the application needs to be modified. The amount of change depends on many factors, including the application's current platform. One of the most obvious changes is source code that accesses the database. The extent of these changes depends on your target environment: moving a database from mainframe to cloud, for example, or transforming flat-file or hierarchical data to an RDBMS. In addition to EBCDIC-to-ASCII conversion, you may have to transform compressed data into a format suitable for the target environment. You may also have to modify sorting routines from EBCDIC-based sorting to ASCII-based sorting.
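To make the sorting point concrete: EBCDIC and ASCII order bytes differently (in EBCDIC, letters sort before digits; in ASCII, digits sort first), so any report or file that depends on byte-order sorting can come out in a different sequence after conversion. A small illustration in Python, using the standard cp037 codec as a stand-in for whatever EBCDIC code page your mainframe actually uses:

```python
# Why sort routines need attention after an EBCDIC-to-ASCII migration.
# cp037 is one common EBCDIC code page; yours may differ.
keys = ["apple", "Baker", "Zebra", "42"]

ascii_order = sorted(keys, key=lambda s: s.encode("ascii"))
ebcdic_order = sorted(keys, key=lambda s: s.encode("cp037"))

print("ASCII byte order: ", ascii_order)   # ['42', 'Baker', 'Zebra', 'apple']
print("EBCDIC byte order:", ebcdic_order)  # ['apple', 'Baker', 'Zebra', '42']

# The character conversion itself is a straightforward transcode:
record = b"\xc8\x85\x93\x93\x96"   # "Hello" encoded in cp037
print(record.decode("cp037"))      # -> Hello
```

Note the digits: they land first in ASCII order and last in EBCDIC order, which is exactly the kind of difference that silently changes report sequencing if sort logic isn't reviewed.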

While these and other areas of your application will require code modification, you should focus your testing efforts on the areas of change and performance rather than the entire code base. It's also important to implement efficient issue-tracking software and processes. Our experience has shown that because many legacy applications have been running reliably for decades and the number of incidents is relatively low, some organizations have gotten a little lazy when it comes to tracking incidents through the resolution process. You'll need to dust off these tools and processes and get them back into shape to support your replatforming project.
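On the first point, one lightweight way to keep the test scope focused is to derive the regression suite directly from the conversion inventory: the list of programs the migration tooling actually touched. The sketch below assumes a hypothetical inventory file and a hand-maintained mapping from source members to test suites; all of the names are illustrative.

```python
from pathlib import Path

# Hypothetical inventory produced by the conversion tooling: one
# modified source member per line (file and member names illustrative).
changed = {
    line.strip()
    for line in Path("conversion_inventory.txt").read_text().splitlines()
    if line.strip()
}

# Hand-maintained mapping from source members to their regression suites.
SUITE_MAP = {
    "CUSTDB.cbl": "tests/customer_db/",
    "SORTRPT.cbl": "tests/reporting/",
    "BILLING.cbl": "tests/billing/",
}

# Select only the suites that cover code the migration actually changed.
suites = sorted({suite for member, suite in SUITE_MAP.items() if member in changed})
print("Suites to run:", suites if suites else "none (inventory shows no code changes)")
```

Driving test selection from the inventory also gives auditors a defensible answer to "why didn't you test everything?": the untested code is provably unchanged.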

As with any project, testing your replatformed applications is important. The point here is to maximize the ROI of your replatforming project while ensuring you deliver a quality product. Think about it this way: when you put new tires on your car, do you test the headlights, turn signals, audio/video system, and air conditioning? Of course not. They weren't modified. Consider taking the same approach when testing replatformed applications.

Craig Marble is Sr. Director, Legacy Modernization Services at Astadia.
