DevOps teams are pressured to have their builds running smoothly … but we all know that's hard to achieve. CI optimization is a constant battle against expectations and resources. From the sheer number of automated tests that run in the pipeline, to flaky tests, to simply not enough CPU, DevOps teams are rarely 100% satisfied with their build process.
A build that used to take 10 minutes to finish can suddenly take 45 minutes once you add automated tests into the picture. And if failed tests are set to retry, that 45-minute build can easily double in time!
Enter AI to save the day! We can't get through a blog post without mentioning AI these days, can we? Apparently not. But it really does help DevOps teams with their CI optimization.
Pressure to keep the pipeline moving is fierce, so keeping workflows running optimally is always top of mind for any DevOps leader. Although there seems to be an abundance of tooling on the market, few tools exist that help optimize workflows to achieve a streamlined build process.
Here are the areas where we see AI helping DevOps leaders with CI optimization.
How to Optimize a Pipeline Running a Lot of Automated Tests
When automated tests come into the pipeline, the historic answer has always been: add CPU and run those tests in parallel; that should fix the problem. Before the team knows it, DevOps is reciting Scotty's famous words from Star Trek: "Captain, I'm giving her all she's got!"
Parallel threads clamp down on test run times initially; however, teams quickly hit a maximum thread count where it's either cost-prohibitive to add more CPU, or parallelization is maxed out and no longer reduces build times.
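The diminishing returns can be sketched with a simple back-of-the-envelope model. This is an illustration, not a measurement: it assumes a fixed per-build overhead (checkout, container spin-up, reporting) that parallelism cannot touch, and the specific numbers are made up.

```python
# Hypothetical sketch: why adding parallel threads stops paying off.
# Assumes a fixed per-build overhead that parallelism cannot reduce;
# all numbers below are illustrative, not measured.

def build_wall_time(total_test_minutes: float, threads: int,
                    fixed_overhead_minutes: float = 5.0) -> float:
    """Wall-clock build time: fixed overhead plus tests split across threads."""
    return fixed_overhead_minutes + total_test_minutes / threads

# Going from 1 to 2 threads nearly halves the build,
# but going from 16 to 32 barely moves the needle while CPU cost doubles.
for threads in (1, 2, 4, 8, 16, 32):
    print(threads, round(build_wall_time(60, threads), 1))
```

With 60 minutes of tests, one thread gives a 65-minute build and two threads a 35-minute build, but jumping from 16 to 32 threads only saves about two minutes.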
What's the next option?
This is where AI has fully come into the picture. There are promising developments in the AI risk-based testing space that are helping DevOps teams with their CI optimization efforts. Leveraging AI, DevOps can set their CI pipeline to auto-select and execute only the automated tests impacted by recent developer changes, on a per-commit basis.
Rather than running 100% of the testsuite in maximum parallel threads, the CI pipeline executes a smart subset, such as 20% of the tests, while still catching the regressions. The pipeline therefore doesn't require as many resources to find regressions and can do so much faster and more efficiently.
An Example of CI Optimization
An E2E testsuite with 1,000 tests running through a CI pipeline in 10 parallel threads takes 60 minutes to complete. That's a long time. So how can the team get below 60 minutes without throwing more hardware at the issue?
Inserting an AI risk-based testing tool into the pipeline would allow it to auto-select and execute only the tests relevant to developer changes, and thereby reduce the number of tests needing to be executed.
In this example, if Smart Test Selection is set at 10%, it would prioritize the top 100 E2E tests for execution based on the recent developer commits. The execution time would drop from 60 minutes down to approximately 6 minutes.
By adding an AI risk-based testing tool into the CI pipeline, the team can keep the full 1,000-test suite while AI picks the top 100 tests for a given change on a per-PR basis, saving time and resources. A worthy option to consider over just throwing more hardware at the issue.
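The core mechanic of test selection can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes we already have a mapping from source files to the tests that exercise them (real tools derive this from coverage data or trained models; here it is hard-coded with hypothetical names).

```python
# Minimal sketch of per-commit test selection, assuming a precomputed
# mapping from source files to the tests that exercise them.
# File and test names are hypothetical.
from typing import Dict, List, Set

COVERAGE_MAP: Dict[str, Set[str]] = {
    "checkout.py": {"test_checkout_flow", "test_cart_totals"},
    "login.py": {"test_login", "test_session_expiry"},
    "search.py": {"test_search_ranking"},
}

def select_tests(changed_files: List[str]) -> Set[str]:
    """Return only the tests impacted by the files changed in a commit."""
    impacted: Set[str] = set()
    for path in changed_files:
        impacted |= COVERAGE_MAP.get(path, set())
    return impacted

# A commit touching only checkout.py runs 2 tests instead of all 5.
print(sorted(select_tests(["checkout.py"])))
```

A commercial tool would replace the static map with coverage traces or an ML model, but the pipeline-side contract is the same: changed files in, a test subset out.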
How to Optimize a Pipeline with Flaky Tests
Flaky tests are the bane of existence for DevOps and QA teams alike. These tests are a constant distraction, from breaking builds to chasing false positives. A test that fails with no underlying code change is simply a test that will cause someone to waste time.
There are a few avenues DevOps leaders can pursue to prevent flaky test distraction, such as retries. For example, if a test fails, retry it 3 to 5 times to ensure it fails consistently each time before raising a real defect.
The downside to this methodology, once again, is that it prolongs the CI build process!
A 10-minute build with 5 minutes' worth of automated tests and a high degree of flakiness can be extended to 25 minutes in some cases. Retries are good for determining flakiness, but there is a tradeoff: time and resources.
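The retry approach, and its cost, can be sketched as a small wrapper around a test run. This is a hedged illustration: `run_test` is a stand-in for whatever your real test runner invokes, and the retry policy shown is a generic one, not a specific CI product's feature.

```python
# Sketch of the retry approach: rerun a failing test up to max_retries
# times before declaring a real failure. `run_test` is a hypothetical
# stand-in for a real test invocation.
from typing import Callable, Tuple

def run_with_retries(run_test: Callable[[], bool],
                     max_retries: int = 3) -> Tuple[bool, int]:
    """Return (passed, attempts). Passing on any attempt counts as a pass."""
    for attempt in range(1, max_retries + 2):  # initial run + retries
        if run_test():
            return True, attempt
        # Each extra attempt adds a full test execution to the build time.
    return False, max_retries + 1

# A consistently failing test costs 4 executions before it fails the build.
print(run_with_retries(lambda: False, max_retries=3))
```

This makes the tradeoff concrete: a genuinely broken test now burns four executions' worth of wall time before the build fails, which is exactly the inflation described above.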
What's the next option?
Giving clean feedback to developers is hard. AI can do this and block out the flaky distraction. AI can recognize trends in test failures and determine whether a test failed due to a real defect or not.
AI models can allow DevOps teams to fail CI builds only on real failures, avoiding the flaky noise inherent in so many testsuites. Even with a degree of flakiness in every run, builds can pass or fail based only on real defects, giving clean signals to developers.
How Does AI Know How to Prevent Flaky Tests?
The AI trains on historical data, learning about the tests and defects, and builds a model that can determine whether a test failed on a real defect or is a false positive.
For example:
A UI testsuite with 1,000 tests has 5% inherent flakiness. After every run, the team must review 50 flaky test failures, a major distraction that may cause developers to start ignoring the results (which is a very bad thing).
Once the AI model is trained on this testsuite, a build with no real defects introduced can be set to pass, even with 5% flakiness. The build fails only when the AI model determines a failure to be a real defect.
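A trained model is beyond a blog sketch, but the decision it feeds into can be illustrated with a simple heuristic: score each test by how often its past failures turned out to be false positives, and fail the build only when a failing test is not historically flaky. This is an illustrative stand-in for a real AI model, with hypothetical test names and thresholds.

```python
# Illustrative heuristic only: a stand-in for a trained AI model.
# Each test's history is a list of past failures, labeled True if the
# failure turned out to be a false positive (flaky), False if it was real.
from typing import Dict, List

def flakiness_rate(history: List[bool]) -> float:
    """Fraction of a test's historical failures that were false positives."""
    return sum(1 for was_flaky in history if was_flaky) / len(history)

def should_fail_build(failed_tests: Dict[str, List[bool]],
                      flaky_threshold: float = 0.8) -> bool:
    """Fail the build only if some failing test is NOT historically flaky."""
    return any(flakiness_rate(h) < flaky_threshold for h in failed_tests.values())

history = {
    "test_a": [True, True, True, True, False],  # 80% flaky -> suppressed
    "test_b": [False, False, True],             # ~33% flaky -> real signal
}
print(should_fail_build({"test_a": history["test_a"]}))  # False: only flaky noise
print(should_fail_build(history))                        # True: test_b looks real
```

A real product would use far richer signals (failure messages, code-change proximity, timing), but the build-gating logic, pass unless a credible real failure is present, is the part the CI pipeline consumes.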
Tooling such as this, which is highly configurable to team preference, can be leveraged to reduce team distraction, keep builds running smoothly, and optimize CI pipelines.
Conclusion
The advent of CI pipelines has accelerated team output to deliver faster than ever. However, as more weight is put on these pipelines, optimization strategies need to be put in place to maintain speed and efficiency.
Automated tests and flaky tests can burden the CI build process. However, new AI technologies, such as AI-powered risk-based testing tools, exist to counteract these CI bottlenecks and keep workflows streamlined and efficient.
Industry News
Sonar announced two new product capabilities for today’s AI-driven software development ecosystem.
Redgate announced a wide range of product updates supporting multiple database management systems (DBMS) across its entire portfolio, designed to support IT professionals grappling with today’s complex database landscape.
Elastic announced support for Google Cloud’s Vertex AI platform in the Elasticsearch Open Inference API and Playground.
SmartBear has integrated the load testing engine of LoadNinja into its automated testing tool, TestComplete.
Check Point® Software Technologies Ltd. announced the completion of its acquisition of Cyberint Technologies Ltd., a highly innovative provider of external risk management solutions.
Lucid Software announced a robust set of new capabilities aimed at elevating agile workflows for both team-level and program-level planning.
Perforce Software announced the Hadoop Service Bundle, a new professional services and support offering from OpenLogic by Perforce.
CyberArk announced the successful completion of its acquisition of Venafi, a provider of machine identity management, from Thoma Bravo.
Inflectra announced the launch of its AI-powered SpiraApps.
The former Synopsys Software Integrity Group has rebranded as Black Duck® Software, a newly independent application security company.
Check Point® Software Technologies Ltd. announced that it has been recognized as a Visionary in the 2024 Gartner® Magic Quadrant™ for Endpoint Protection Platforms.
Harness expanded its strategic partnership with Google Cloud, focusing on new integrations leveraging generative AI technologies.
OKX announced the launch of OKX OS, an onchain infrastructure suite.