The Worst 4 Habits in Software Testing Today
Stop testing like it's 1985!
December 04, 2017

Paola Moretto
Nouvola

In this post I will share some wisdom about the biggest problem – okay, problems – in the field of software testing right now. While this is not an exhaustive list, these four bad habits have emerged as the predominant themes.

Bad habit #1: Not testing enough

Companies simply don't test enough (or, in less mature organizations, don't test at all). This is definitely the biggest problem with testing.

We are consistently stunned to hear things like, "Well, our developers are confident that the code will work well under traffic." Not to undermine anybody's confidence, but without data to back it up, there is really no objective basis to assume that will be true. Nor is there any good reason to assume the unnecessary risk that comes with this kind of "blind faith" in the code.

We loved this quote from Maaret Pyhäjärvi's keynote at the STARWEST conference: "Testing is not about breaking the code, it's about breaking your illusions about the code."

Your illusions about the code can be about a scenario, usability, or performance under load. It doesn't matter – if you don't test, you won't have enough data to make informed decisions.
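To make that concrete, even a small scripted load check gives you data where "blind faith" gives you none. Here is a minimal sketch in Python, assuming a staging endpoint and the third-party requests library; the URL, concurrency, and request counts are placeholders, not recommendations.

    # Toy load check: send concurrent requests at one endpoint and report
    # latency percentiles. A real load test would model realistic user
    # scenarios and traffic ramps; this only shows that getting data is cheap.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    TARGET_URL = "https://staging.example.com/api/health"  # hypothetical endpoint
    CONCURRENCY = 20
    TOTAL_REQUESTS = 200

    def timed_request(_):
        start = time.perf_counter()
        response = requests.get(TARGET_URL, timeout=10)
        return response.status_code, time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(timed_request, range(TOTAL_REQUESTS)))

    errors = sum(1 for status, _ in results if status >= 400)
    latencies = sorted(elapsed for _, elapsed in results)
    p95 = latencies[int(len(latencies) * 0.95) - 1]

    print(f"errors: {errors}/{TOTAL_REQUESTS}")
    print(f"median: {statistics.median(latencies):.3f}s  p95: {p95:.3f}s")

Numbers like these, however rough, replace "our developers are confident" with something you can actually discuss.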

Another common excuse for insufficient testing, peddled by well-meaning teams, is that they "don't have time for testing." Under pressure to push features out and meet market demands, they convince themselves they are going faster by skipping the testing phase.

This is another illusion, except this one is about the QA process rather than the code itself. There are always testers: if you haven't done enough testing, you're simply relegating that role to your end users. And that can be a very costly decision, resulting in a swarm of unhappy users, which too easily translates into painful revenue loss for the company. The other consequences, like increased rollbacks and a general slowdown of your development cycle, are almost inconveniences by comparison.

Bad Habit #2: Not doing enough regression testing

If your last test run was a long time ago, you can virtually guarantee that new problems have crept in since then.

Don't trust outdated test results. A regression happens when a code change has unintended consequences, breaking behavior that previously worked. Regression testing is usually a very good candidate for automation, because the tests are stable and repeatable and the scenarios are established. If there are issues, you want to find them before your users do.
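As an illustration, here is what a small automated regression check might look like in pytest; the billing module, function, and expected totals are all hypothetical. Previously verified outputs are pinned down as a baseline, so any change that alters them fails the suite before a user ever sees it.

    # Hypothetical pytest regression sketch: the module, function, and
    # expected values are invented for illustration. Verified outputs from
    # past releases become the baseline; a change that breaks them is a
    # regression, caught automatically on every run.
    import pytest

    from billing import calculate_invoice_total  # hypothetical module under test

    @pytest.mark.parametrize(
        "line_items, tax_rate, expected_total",
        [
            ([10.00, 5.50], 0.08, 16.74),
            ([100.00], 0.00, 100.00),
            ([], 0.08, 0.00),
        ],
    )
    def test_invoice_total_has_not_regressed(line_items, tax_rate, expected_total):
        assert calculate_invoice_total(line_items, tax_rate) == pytest.approx(expected_total)

Because these scenarios are stable and repeatable, the suite can run unattended on every build.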

If you don't have recent data about your performance, it's almost equivalent to not having data at all.

Bad habit #3: Still testing like it's 1985

So you deploy to a multi-cloud environment, use CI with a build for every commit, GitHub for source control, Agile methodologies, and modern developer communication tools, and then you test with slow, dinosaur-like tools? It simply doesn't work. Don't let it get to that point. You can't use '80s technology for third-millennium development.

Bad habit #4: Too little / too much automation

This is less obvious and probably a bit controversial. First, continuous integration and continuous delivery can't really happen without continuous testing.

One of the pillars of continuous testing is automation, which enables you to get to the right velocity if DevOps / CD is the goal. Automation must be a priority. Not doing enough to automate these workflows is a well-known problem in the industry that delays full DevOps adoption.
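One way to make testing continuous, sketched below under stated assumptions: a smoke check that a CI/CD pipeline runs on every commit, exiting nonzero so the pipeline step fails (and the deploy stops) when a critical endpoint misbehaves. The endpoints are placeholders.

    # Sketch of a CI smoke gate: probe a few critical endpoints and exit
    # nonzero if any is unhealthy, so the pipeline step fails and blocks
    # the deploy. URLs are placeholders for illustration.
    import sys

    import requests

    SMOKE_ENDPOINTS = [
        "https://staging.example.com/health",   # hypothetical URLs
        "https://staging.example.com/login",
        "https://staging.example.com/search?q=test",
    ]

    def main() -> int:
        failures = []
        for url in SMOKE_ENDPOINTS:
            try:
                response = requests.get(url, timeout=5)
                if response.status_code != 200:
                    failures.append(f"{url} -> HTTP {response.status_code}")
            except requests.RequestException as exc:
                failures.append(f"{url} -> {exc}")
        for failure in failures:
            print(f"SMOKE FAILURE: {failure}", file=sys.stderr)
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())

A script like this is deliberately fast and shallow; it complements, rather than replaces, the deeper functional and load suites.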

However, we also see the opposite problem. People think they have solved everything with automation. In reality, however, the "automate it all" approach tends to fall short because it relies on your ability to predict all user scenarios with 100 percent accuracy. Given the increasing complexity of applications and the technological contexts in which they are being used, that level of predictability is getting harder and harder to achieve.

So it usually makes sense to leave space for exploratory testing, heuristic testing, and a creative, intuitive approach to finding out what annoys your users. Exploratory testing (testing that is not normed or scripted in advance) is essentially the art of inventing test cases in real time. Automation and exploratory testing are both valid approaches, and they are not mutually exclusive.

It's absolutely essential for modern development teams to put testing front and center on their priority list, and to adopt best practices and tools that help them accelerate their development process. Be sure to make your software ready for the real world.

Paola Moretto is Co-Founder and CEO of Nouvola
