Building Confidence in Automation
Part 2 of a three-part series on introducing and building test automation into your application development and deployment pipeline
April 15, 2019

Drew Horn
Applause

In Part 1 of this three-part series, we covered the first steps of introducing automated testing into your software development lifecycle. Now that you've done the early work of codifying manual tests into an automation framework and achieved some quick wins with initial smoke tests, you can continue to build confidence in test automation.

Start with Part 1: Beginning Your Test Automation Journey

Working with Test Case Management Tools

The most important competency at this intermediate stage of your SDLC is establishing a test case management (TCM) system and reporting structure. All of your test results should flow to a single place and be visible from a single view. With that consistent view into any failures, you can decide with confidence whether or not to deploy.

Furthermore, this approach allows you to merge your manual and automated testing efforts into a single system with a single source of truth. As these two processes are merged (which we will cover later in this article), your practice can scale properly without hitting unnecessary bottlenecks.

With a single TCM in place, you can more effectively add quality gates to your deployment pipeline. Each testing stage in the pipeline (e.g. smoke testing, manual regression, test automation) should have a defined quality gate that determines whether the build should continue through the pipeline for additional testing. Implementing quality gates at each stage helps your team identify build issues earlier, and the earlier build issues are identified, the more cost-optimized your practice will be. This is especially important as you scale your practice and increase test coverage.
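To make this concrete, here is a minimal sketch of what a quality gate might look like in code: a small script the CI server runs after a testing stage, reading aggregated results exported from the TCM and stopping the pipeline when the pass rate drops below a threshold. The stage names, threshold values, and results-file format are illustrative assumptions, not a prescribed implementation.

    # quality_gate.py - illustrative quality-gate check run by the CI server after a
    # testing stage; exits non-zero to stop the pipeline when the gate fails.
    import json
    import sys

    # Minimum pass rate per stage (example values; tune to your own risk tolerance).
    THRESHOLDS = {"smoke": 1.0, "automated_regression": 0.98, "manual_regression": 0.95}

    def gate(stage, results_file):
        # results_file is assumed to hold counts exported from the TCM,
        # e.g. {"passed": 120, "failed": 2, "skipped": 3}
        with open(results_file) as f:
            results = json.load(f)
        executed = results["passed"] + results["failed"]
        pass_rate = results["passed"] / executed if executed else 0.0
        print(f"{stage}: {results['passed']}/{executed} passed ({pass_rate:.1%})")
        return pass_rate >= THRESHOLDS[stage]

    if __name__ == "__main__":
        stage_name, path = sys.argv[1], sys.argv[2]  # e.g. smoke results/smoke.json
        sys.exit(0 if gate(stage_name, path) else 1)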

Unifying Manual and Automated Testing

Once your test cases have been merged into a single repository and you have assessed which tests should be run manually and which should be automated, it becomes much easier to combine your manual and automated testing efforts and embed them into your deployment pipeline. This is a must for any production-grade QA practice looking to scale.

While the process varies from team to team, in general, embedding automated and manual testing together into your deployment pipeline can be seen as a four-step process:

1. Define a change set given your evaluation and goals: More specifically, what technical changes to your deployment pipeline and/or your manual processes need to be implemented or documented in order to embed all testing into the pipeline? For example, once an automated smoke test is complete, should a QA lead be notified so that they can initiate manual testing (a minimal notification sketch follows this list)? When manual testing is done, how does a key stakeholder review the results and determine whether the quality gate should allow the build to proceed to the next step in the pipeline for additional downstream (possibly non-functional) testing? These are the types of questions that should be answered so you have a clear playbook for moving forward.

2. Test the solution out-of-band: Changes to your pipeline and processes should also be tested. A great way to test your new process without impacting the current workflow is to run it out-of-band. For example, you could build a job on your CI server that runs automated regression tests but does not affect the existing pipeline flow. Doing so allows you to review the process and iterate as needed until all teams are ready to move it directly inline.

3. Train your team on the process: There will almost always be manual steps, so it is important to solidify them by training your team throughout the development, integration, staging, production, and feedback stages.

4. Implement the changes in your CI pipeline: Finally, once the changes to the pipeline and processes have been vetted and all teams are trained, you can make the switch.
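As an illustration of the hand-off question raised in step 1, the sketch below shows one way the CI server could notify the QA lead that manual testing can begin once the automated smoke stage passes. The webhook environment variable and message shape are assumptions; substitute your own chat tool or TCM integration.

    # notify_qa.py - illustrative hook run by the CI server once the automated smoke
    # stage passes, so the QA lead knows manual testing can begin (step 1 above).
    # The webhook variable and message shape are assumptions.
    import json
    import os
    import sys
    import urllib.request

    WEBHOOK_URL = os.environ["QA_NOTIFY_WEBHOOK"]  # e.g. a chat tool's incoming webhook

    def notify(build_id, results_url):
        message = {"text": (f"Smoke tests passed for build {build_id}. "
                            f"Manual regression can start: {results_url}")}
        request = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(message).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    if __name__ == "__main__":
        notify(sys.argv[1], sys.argv[2])  # build id, link to the results in the TCM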

Scaling

The last component of getting your automation practice out of its initial stage is starting to scale. With the above tools and processes in place, you should feel comfortable adding more automated tests and platforms. As your test matrix grows and test runs become more frequent, executing tests in parallel becomes a high priority. Ideally, your automation framework should support parallel execution. The challenge is making sure the tests your team has developed work well when run at the same time. To do this, make your tests as atomic and idempotent as possible: the state of the application after each test should (if possible) be the same as when it started. If this isn't possible, set up test data so that each test relies on its own data. If test data used in one test can impact another, you will have a very difficult time debugging test failures.
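The sketch below illustrates the idea, assuming a pytest-based suite: each test provisions its own uniquely named data through a fixture and cleans it up afterwards, so no test depends on, or interferes with, another. The commented-out setup and teardown calls are placeholders for your application's own provisioning logic.

    # test_orders.py - sketch of atomic, idempotent tests (pytest is an assumption;
    # apply the same pattern in whatever framework you use). Each test provisions its
    # own data through a fixture and cleans it up afterwards, so tests can run in
    # parallel without sharing state.
    import uuid
    import pytest

    @pytest.fixture
    def isolated_user():
        # Unique data per test avoids collisions when tests execute at the same time.
        user = {"id": str(uuid.uuid4()), "name": "qa-temp-user"}
        # create_user(user)        # placeholder: provision via your API or a DB seed
        yield user
        # delete_user(user["id"])  # placeholder: return the app to its prior state

    def test_place_order(isolated_user):
        # Exercises the feature using only this test's own data.
        assert isolated_user["name"] == "qa-temp-user"

    def test_cancel_order(isolated_user):
        # A second test gets a fresh user, so outcomes never depend on execution order.
        assert isolated_user["id"]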

If your framework doesn't support running tests in parallel, you can instead set up separate jobs on your CI server to run groups of tests at the same time. This works, but it generally adds complexity to your pipeline that could otherwise be encapsulated in your framework.
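One common way to split a suite across CI jobs is to tag tests into groups and have each job run one group, as sketched below (again assuming pytest; the group names and commands are illustrative). If the framework can parallelize on its own, a plugin such as pytest-xdist keeps that complexity inside the framework instead.

    # Sketch: tag tests into groups so separate CI jobs can run them side by side
    # when the framework itself cannot parallelize. Group names are illustrative;
    # register markers in pytest.ini to avoid warnings.
    import pytest

    @pytest.mark.group_checkout
    def test_checkout_flow():
        ...

    @pytest.mark.group_search
    def test_search_results():
        ...

    # CI job A runs:  pytest -m group_checkout
    # CI job B runs:  pytest -m group_search
    # If the framework handles parallelism (e.g. pytest-xdist):  pytest -n auto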

Read Part 3: Achieving Expert Status in Test Automation.

Drew Horn is Senior Director of Automation at Applause