Test or Automated Check? How to Strategically Implement DevOps Automation
June 25, 2019

Malcolm Isaacs
Micro Focus

DevOps thrives on automation, and it's clear why: manual processes are slow, error-prone, and inconsistent. In practice, however, automation rates, especially for testing, remain very low. Achieving effective test automation can be challenging, but there are steps you can take to increase it in your organization.

Continuous Integration Needs Continuous Testing

Before diving into what you can do to increase test automation, it's important to understand how continuous integration and continuous testing intersect, since both shape how automation is implemented.

Continuous Integration (CI) is a DevOps practice whereby developers push code changes to a shared code base many times every day, where they are automatically integrated with the rest of the code. This frequent merging carries the risk of destabilizing the code base and introducing defects into the software delivery pipeline.

To find out quickly if a change causes problems, teams use a Continuous Testing (CT) approach, whereby a code change automatically runs a set of tests, designed to provide speedy feedback to the developers.

Checking whether a recent change breaks functionality that previously worked is known as regression testing. It is an essential part of the CI pipeline and is triggered whenever a change is committed, which can be many times each day.

In addition to regression testing, CT should check that new functionality works as expected, that it is easy to use, and that it is resilient to unexpected circumstances. It should also cover aspects such as application performance and security.

Continuous Testing Needs Test Automation

The team must get feedback on their changes as quickly as possible, but with so many tests to run, it is not practical to test manually. The solution is to automate the tests so that they can run unattended and complete quickly and reliably.
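To make this concrete, here is a minimal sketch of such an automated check, assuming a Python code base and the pytest test runner (the article doesn't prescribe either); the calculate_discount function and its business rules are invented for the example:

    # test_discounts.py -- a small automated check that CI can run unattended.
    # calculate_discount and its business rules are hypothetical examples.

    def calculate_discount(order_total):
        """Return the discount rate applied to an order of the given total."""
        return 0.10 if order_total >= 100 else 0.0

    def test_large_orders_get_ten_percent_discount():
        assert calculate_discount(150) == 0.10

    def test_small_orders_get_no_discount():
        assert calculate_discount(20) == 0.0

A CI server can invoke the whole suite (for example, by running pytest) on every commit and fail the build if any check fails, so the team gets feedback within minutes instead of waiting for a manual test pass.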

However, automating everything is not always possible. For example, evaluating the user experience of a new feature requires a subjective examination. We need to study and explore the new feature before we can determine if the user experience is satisfactory. Many teams turn to Exploratory Testing, which involves manually playing with the system, learning about it as you go, and applying the knowledge gained to conduct further testing.

Although the goal should be to automate whatever can be automated, there's no magic wand to "automate all the things."

Some of the challenges with test automation are:

■ Applications change frequently, requiring automation scripts to be constantly updated

■ Stable test environments with appropriate test data are difficult to provision

■ Skilled automation engineers are in short supply

Nevertheless, organizations are facing up to these challenges and looking to grow their automation. A word of caution, though: Jez Humble and David Farley, in their book Continuous Delivery, recommend a gradual approach to automation rather than attempting to automate everything at once.

Be Smart About Test Automation

Here are some tips to help you increase your test automation.

1. Start with tests that are small, focused, and fast

Tests should be small, independent, and self-contained; each should check a very specific area of the code and complete within a very short amount of time. These tests, known as unit tests, should be the first tests to run. You can remove dependencies on external or unavailable services and components by emulating them with a service virtualization tool. Once these tests have passed, more extensive testing can be performed.
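If a dedicated service virtualization tool isn't available, a lightweight approximation at the unit-test level is to stub the external call directly in the test. The sketch below is illustrative: the pricing module, its get_exchange_rate call to a remote service, and convert_price are assumed names, and Python's standard unittest.mock replaces the network call.

    # test_pricing.py -- unit test with the external rate service stubbed out.
    # The pricing module and its functions are hypothetical.
    from unittest.mock import patch

    import pricing  # module under test; get_exchange_rate normally calls a remote API

    def test_convert_price_uses_current_rate():
        # Replace the network call with a canned response so the test is
        # fast, deterministic, and independent of the real service.
        with patch.object(pricing, "get_exchange_rate", return_value=1.25):
            assert pricing.convert_price(10.0, "EUR", "USD") == 12.5

The same idea scales up: a service virtualization tool does this at the protocol level, so broader integration and performance tests can run even when the real dependency is unavailable.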

2. Automate security testing and performance testing early on

Security testing and performance testing are often run late in the cycle, increasing the risk of serious issues being detected when they're more difficult to fix. To get quick feedback, design short load tests that run with each build, and include static application security tests that focus on the code that changed. You can run more comprehensive tests after the initial tests have run, or in parallel. The key is to provide valuable feedback as early as possible.
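As one illustration of a short, build-time performance check, the sketch below uses Python's standard library plus the third-party requests package; the endpoint URL, request count, and latency budget are all assumptions for the example, not recommended values:

    # smoke_load_test.py -- a short latency check for every build, not a full load test.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # third-party HTTP client, assumed to be installed

    URL = "https://staging.example.com/api/health"  # hypothetical endpoint
    REQUEST_COUNT = 50
    MAX_P95_SECONDS = 0.5  # illustrative latency budget

    def timed_get(_):
        start = time.perf_counter()
        requests.get(URL, timeout=5)
        return time.perf_counter() - start

    def main():
        with ThreadPoolExecutor(max_workers=10) as pool:
            durations = sorted(pool.map(timed_get, range(REQUEST_COUNT)))
        p95 = durations[int(len(durations) * 0.95) - 1]
        print(f"median={statistics.median(durations):.3f}s  p95={p95:.3f}s")
        if p95 > MAX_P95_SECONDS:
            raise SystemExit("95th percentile latency exceeds budget -- failing the build")

    if __name__ == "__main__":
        main()

A dedicated load testing tool will give far more realistic results; the point of a check like this is only to catch gross regressions on every commit, leaving the comprehensive performance and security runs to later or parallel pipeline stages.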

3. Put developers and testers on the same team, with the same tools

Automation rates increase when testers and developers work together. When they use the same tools and technologies for development and testing, communication becomes more efficient, automation becomes aligned with objectives, and issues can be identified and resolved more quickly. Consider practicing Test Driven Development (TDD), whereby tests are written first, and only then is code written to make them pass. This can reduce ambiguity and ensure that developers and testers are on the same page.
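A minimal TDD illustration follows, with the slugify function and its expected behavior invented for the example; in practice the test and production code live in separate files, but the rhythm is the same:

    # Step 1 (red): write the tests first -- they fail because slugify doesn't exist yet.
    def test_slugify_replaces_spaces_and_lowercases():
        assert slugify("Hello DevOps World") == "hello-devops-world"

    def test_slugify_strips_punctuation():
        assert slugify("CI/CD, quickly!") == "cicd-quickly"

    # Step 2 (green): write just enough production code to make the tests pass.
    import re

    def slugify(title):
        cleaned = re.sub(r"[^a-z0-9 ]", "", title.lower())
        return "-".join(cleaned.split())

The failing test defines the expected behavior before any implementation is written, which is exactly the shared understanding that keeps developers and testers aligned.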

4. Make your metrics visible to the team through a real-time dashboard

Establish a dashboard showing each build and pipeline, and what stage they are at. Include the status of tests that are running, which ones have passed and failed, and how long they are taking to run. Visibility into the pipelines through simple-to-understand, real-time graphs and charts on a central dashboard provides critical information that helps teams proactively identify and resolve issues.
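Most CI tools and test runners can already emit results in the JUnit XML format, which makes feeding a dashboard straightforward. The sketch below is one possible approach rather than a prescribed one; the report file name and the choice to print the summary (instead of pushing it to a specific dashboard product) are assumptions:

    # publish_test_metrics.py -- summarize a JUnit XML test report for a dashboard.
    import xml.etree.ElementTree as ET

    def summarize(report_path="test-results.xml"):  # hypothetical report location
        root = ET.parse(report_path).getroot()
        # Some runners wrap individual <testsuite> elements in a <testsuites> root.
        suite = root.find("testsuite") if root.tag == "testsuites" else root
        total = int(suite.get("tests", 0))
        failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
        return {
            "total": total,
            "passed": total - failed,
            "failed": failed,
            "duration_seconds": float(suite.get("time", 0)),
        }

    if __name__ == "__main__":
        # In a real pipeline this would be pushed to the team's dashboard or metrics store.
        print(summarize())

Run as a pipeline step after the test stage, a script like this keeps the pass/fail counts and durations on the dashboard current with every build.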

What's Next for Test Automation?

The assertion that test automation isn't suited for in-depth explorations of a system is increasingly being challenged by artificial intelligence. AI and machine learning (ML) will help computers "learn" how a webpage behaves, and then develop, optimize, and maintain scripts that test the page and how it interacts with the rest of the system, all without human intervention.

AI and ML are pushing the boundaries of automation. There will be significant advances over the next few years as the technology matures, blurring the lines between automated checking and intelligent testing.

Malcolm Isaacs is Senior Solutions Manager, Application Delivery Management, at Micro Focus