Think about your favorite desktop or mobile app for a moment. It probably has a pretty nice user experience. Whatever that software was designed to do, or designed to help you do, it does. And when you want it to! And on whatever device(s) you currently own.
None of that is easy.
The people who create, test, and deliver those favorite, can't-live-without apps do so under immense pressure to deliver the next release even faster and at even higher quality than the last. SmartBear conducted a survey to learn the methodologies, practices, and tools used by the software testing professionals worldwide who build, validate, and deliver software. In doing so, we get a glimpse into emerging trends, fading practices, and what the future may hold for software quality across industries and geographies.
What did we discover? First, keeping pace with accelerating release cycles, a trend we saw this year even among teams on quarterly and yearly cycles, is a continuous challenge.
The major theme that keeps rising to the top is visibility: the need for engineering teams to see the "whole picture" of software quality. Whether it's the root cause of actual or potential bugs, or performance issues caught in pre-production before they surface in production, increased visibility is needed throughout the software development lifecycle (SDLC). Visibility into the data gathered across the SDLC lets teams retain quality while improving efficiency and increasing velocity, rather than sacrificing one for the other. Development teams have traditionally looked at post-production data to judge how well their applications are performing. Increasingly, however, that data tells only part of the story teams need to be able to tell.
The pace of business (demands from both inside and outside an organization) is simply too fast to wait for post-production data to determine how well applications are performing. By taking more proactive steps, teams can use pre-production data to maintain and scale quality. This shift-left approach to data science and data storytelling gives developers and testers the confidence to increase release and deployment velocity. That's the big picture.
Test automation coverage rebounded after a dip last year. Overall, we have seen a shift in the bell curve to engineering teams using more automation with more than half of respondents in the 25-75% automation range and 42% of respondents reporting that more than half of their tests are automated. The biggest challenge to test automation has consistently remained a high frequency of application changes. Respondents also answered that a lack of time is the biggest challenge to their overall testing initiatives.
Half of respondents reported spending more than 70% of their week testing, and three quarters said they spend over 50%. Despite not having enough time to test, nearly two-thirds reported they were satisfied or very satisfied with their testing processes. Those who spend the least amount of time testing reported being the least satisfied with their testing processes.
Web apps and APIs continue to be heavily tested with mobile app testing also continuing upward. The three testing practices that respondents said they perform most were manual testing (78%), regression testing (66%), and automated testing (62%). Those practices were followed by end-to-end, integration, and exploratory testing.
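To make the top two automated practices concrete, here is a minimal sketch of an automated regression test in Python. This is a hypothetical illustration, not code from the survey: a small function plus checks that pin down its current behavior so a future change cannot silently reintroduce an old bug.

```python
def normalize_discount(percent: float) -> float:
    """Clamp a discount percentage to the valid 0-100 range."""
    return max(0.0, min(100.0, percent))

# Regression guards: these assertions lock in today's behavior.
# The out-of-range cases document a (hypothetical) past bug where
# negative discounts reached the billing code unclamped.
assert normalize_discount(25.0) == 25.0    # in-range values pass through
assert normalize_discount(-5.0) == 0.0     # clamped at the lower bound
assert normalize_discount(150.0) == 100.0  # clamped at the upper bound
```

In practice these checks would live in a test runner such as pytest or unittest and run on every commit, which is what lets automated suites absorb the "high frequency of application changes" respondents cite as their biggest challenge.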
For APIs, 41% reported their top concern is functionality, which comes as no surprise. This year, security (24%) is the second-highest concern, while availability (22%) and performance (13%) dropped to the bottom.
The trends in which roles write, perform, and manage software testing, whether developers, testers, or a hybrid of the two (just don't leave it exclusively to your users!), continue to shift, but testing's critical importance is increasingly well understood. Whether an organization is looking to boost software quality, security, user experience, accessibility, or brand loyalty, a robust, holistic approach to software testing is mandatory, not something to be sacrificed for "speed."
Increased visibility into the actual and potential impacts of code changes, test results, and load is becoming the path forward for organizations looking to create a true competitive advantage in any industry. The product a company provides to its customers will continue to take the lion's share of credit for such an advantage, but the real innovators are unlocking advantages at every stage of the SDLC, long before that product ever makes it out the door.
Methodology: SmartBear gathered input on a variety of topics around software testing, including development and delivery models, testing and quality assurance challenges and practices, test management and performance testing trends, and API and UI testing tools and techniques. More than 1,500 manual testers, automation engineers, developers, consultants, QA managers, and analysts responded via a 61-question survey over the course of five weeks. This year's survey was well balanced, with respondents from North America, Asia, and Europe.