Achieving Expert Status in Test Automation
Part 3 of a three-part series on introducing and building test automation into your application development and deployment pipeline
May 06, 2019

Drew Horn
Applause

We outlined how to get started with test automation in parts one and two of this series. Now, we'll finish with what it takes to achieve an advanced level of maturity in your automation practice.

Start with Part 1: Beginning Your Test Automation Journey

Start with Part 2: Building Confidence in Automation

As we've stated throughout this series, the end goal of test automation is to enable frictionless, continuous testing in a high-throughput deployment pipeline. As you progress through the beginner and intermediate stages of test automation, you should notice a gradual increase in efficiency and release velocity. However, following a templated approach to test automation will only take you so far.

The expert stage of test automation is all about continuous optimization. More specifically, this phase is about collecting data about your existing process, analyzing that data to derive quality insights, applying those insights to improve your practice, and then measuring those improvements as you repeat the cycle. There are three key steps to realizing continuous optimization.

Step 1: Do Just Enough Testing at Each Phase of Deployment

To prepare yourself for successful continuous optimization, you should first take a step back to ensure you are doing the right amount of testing at each stage of your deployment process. How much unit and initial integration testing are you doing? How many smoke and sanity tests are running early on to determine which builds are stable and which warrant additional downstream testing? When are you running your regression tests and your later-stage manual tests? Where does your non-functional or other costly testing fit in your pipeline?

It is important to analyze your pipeline and verify you are doing just enough testing at each stage because it allows you to halt the pipeline as soon as an issue surfaces at a particular stage. This testing approach is really the first step of continuous optimization: it is cost effective, and it establishes multiple measurable milestones across your testing pipeline. If you spread the process out and test incrementally, you can start collecting data at every single stage. Make sure to have quantifiable quality gates at each of these stages; they help drive which measurements to take during the testing process.
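To make the idea of a quantifiable quality gate concrete, here is a minimal sketch in Python. The stage names, pass-rate threshold, and time budget are hypothetical; the point is simply that each gate is a measurable yes/no decision the pipeline can act on.

# Minimal sketch of a quantifiable quality gate (names and thresholds are hypothetical).
from dataclasses import dataclass

@dataclass
class StageResult:
    stage: str              # e.g. "unit", "smoke", "regression"
    passed: int
    failed: int
    duration_minutes: float

def gate_passes(result: StageResult, min_pass_rate: float, max_minutes: float) -> bool:
    """Return True if the build may continue to the next pipeline stage."""
    total = result.passed + result.failed
    pass_rate = result.passed / total if total else 0.0
    return pass_rate >= min_pass_rate and result.duration_minutes <= max_minutes

# Example: a smoke stage must be at least 98% green and finish within 15 minutes.
smoke = StageResult(stage="smoke", passed=49, failed=1, duration_minutes=12.5)
print(gate_passes(smoke, min_pass_rate=0.98, max_minutes=15))  # True

Expressing each gate this way makes the gate itself something you can measure, report on, and tune over time.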

Step 2: Collecting Metadata About Your Testing Process

At each phase of testing, think about what data you can collect and feed into a repository so you can mine it later. Focus on at least these key questions while you are implementing your metadata collection strategy:

■ What stage of the testing process are we looking at?

■ What build or milestone is under test?

■ How many tests were run?

■ How long did each test take?

■ What platforms were tested on?

■ Which tests passed and which ones failed?

■ Is the ratio of passed-to-failed tests acceptable for that particular quality gate?

■ How long is it taking to triage automated test failures?

■ Was the build kicked back or is the deployment process continuing?

■ What bugs were associated with this build?

Collecting test metadata that answers questions like these at each phase allows teams to compile substantial insights later, especially when munging this data with data from other teams (e.g., engineering, marketing).
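As a rough illustration, the sketch below captures answers to the questions above for a single pipeline stage and appends the record to a shared store. The field names, values, and JSON-lines file are illustrative assumptions, not a prescribed schema.

# Minimal sketch of a per-stage test metadata record (field names are illustrative).
import json
from datetime import datetime, timezone

record = {
    "stage": "regression",                 # which phase of the testing process
    "build": "1.14.3+build.2817",          # build or milestone under test
    "tests_run": 412,
    "platforms": ["chrome-latest", "ios-16", "android-13"],
    "passed": 401,
    "failed": 11,
    "pass_rate": round(401 / 412, 3),
    "avg_test_seconds": 38.4,
    "triage_minutes": 95,                  # time spent triaging automated failures
    "gate_passed": False,                  # build kicked back vs. deployment continuing
    "bugs": ["QA-2231", "QA-2240"],        # bugs associated with this build
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

# Append to a shared repository -- here simply a JSON-lines file for illustration.
with open("test_metadata.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")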

Step 3: Making Data-Driven Decisions

Now that you have captured data about your testing process, you can organize and visualize it using tools like Splunk or Domo to make it more digestible.

Once you have your data in a dashboard, it’s time to actually do something with it. You may, for example, look at your data and determine that a subset of your automated tests is not providing the right value for your team. This is a common situation when a few very complex tests have been automated but don’t run reliably. By collecting the data mentioned above, you should be able to measure the impact such unreliable tests are having on your release process. You might then move those tests to the manual suite and measure how that improves your test times.
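As an illustration of that kind of analysis, the sketch below assumes per-test results have also been collected (in a hypothetical test_results.jsonl file) and flags tests whose failure rate suggests they are better candidates for repair, quarantine, or the manual suite. The 20-run and 20% thresholds are arbitrary examples.

# Minimal sketch: mine collected results to flag unreliable tests (thresholds hypothetical).
import json
from collections import defaultdict

stats_by_test = defaultdict(lambda: {"runs": 0, "failures": 0})

# Assumes each line holds one per-test result, e.g. {"test": "checkout_flow", "failed": true}
with open("test_results.jsonl") as f:
    for line in f:
        result = json.loads(line)
        stats_by_test[result["test"]]["runs"] += 1
        stats_by_test[result["test"]]["failures"] += int(result["failed"])

# Tests failing more than 20% of the time over at least 20 runs are candidates for
# repair, quarantine, or a move to the manual suite.
for test, stats in stats_by_test.items():
    failure_rate = stats["failures"] / stats["runs"]
    if stats["runs"] >= 20 and failure_rate > 0.20:
        print(f"{test}: {failure_rate:.0%} failure rate over {stats['runs']} runs")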

To take things a step further, you can incorporate data from other departments into your insights to further refine your testing strategy. For example, consider munging development code coverage data into your quality decisions. This can help you visualize what your testing triangle actually looks like. Also consider pulling marketing insights into your datasets to cross-reference real-time customer usage data with your testing strategy. Your customers’ usage patterns will change and evolve as your application grows and new features are introduced. It’s important to stay on top of how those usage patterns change so that you can quickly and continuously adjust your testing strategy.
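A toy example of that kind of cross-reference: assuming you can attribute both customer usage and automated tests to product features, you can flag features that carry a large share of usage but a small share of tests. The feature names and numbers below are made up purely for illustration.

# Minimal sketch: cross-reference customer usage with automated test coverage per feature
# (data sources, feature names, and numbers are hypothetical).
usage_share = {"checkout": 0.42, "search": 0.31, "wishlist": 0.04, "profile": 0.23}
tests_per_feature = {"checkout": 35, "search": 12, "wishlist": 40, "profile": 18}

total_tests = sum(tests_per_feature.values())
for feature, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
    test_share = tests_per_feature.get(feature, 0) / total_tests
    if test_share < share / 2:  # arbitrary threshold: under-tested relative to usage
        print(f"{feature}: {share:.0%} of usage but only {test_share:.0%} of automated tests")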

Remember that improving your automation practice is an unending process; you are never truly done. Following these steps while incorporating the lessons you’ve learned along the way will help you continually optimize your automation practice.

Drew Horn is Senior Director of Automation at Applause
