Early Lifecycle Performance Testing and Optimization – Without the Grief
March 15, 2012

Steve Tack
Dynatrace

Today’s consumers have high expectations for exceptional website and web application speed, including during peak traffic periods like the holidays for retailers. A recent survey shows that almost 90 percent of consumers believe it is important for websites and web applications to work well during peak traffic times. When they don’t, these consumers act quickly: 75 percent of those who experience poor performance during peak periods go to a competitor’s site, and 86 percent are less likely to return to the website. Worse yet, many consumers flock to social networks, where they spread the word about their disappointing web experience to the masses.

The majority of website visitors now expect websites and web applications to load in two seconds or less, and it has been estimated that abandonment rates climb by a further eight percent for each additional two seconds of response time.
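To put that estimate in concrete terms, here is a minimal sketch of how those numbers compound. The baseline abandonment figure and the simple linear model are assumptions chosen purely for illustration; they are not figures from the survey.

```python
# Illustrative reading of the cited estimate: abandonment grows roughly
# 8 percentage points for every 2 seconds beyond the 2-second expectation.
BASELINE_ABANDONMENT = 0.05   # assumed baseline at a 2-second load time (placeholder)
STEP_SECONDS = 2
STEP_INCREASE = 0.08

def estimated_abandonment(load_time_seconds: float) -> float:
    """Rough, linear estimate of abandonment rate for a given load time."""
    extra = max(0.0, load_time_seconds - 2.0)
    return min(1.0, BASELINE_ABANDONMENT + STEP_INCREASE * (extra / STEP_SECONDS))

for t in (2, 4, 6, 8):
    print(f"{t}s load time -> ~{estimated_abandonment(t):.0%} abandonment")
```

Even under these simple assumptions, a page that drifts from two seconds to eight seconds loses a large additional slice of its visitors.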

With so much riding on performance, you can’t afford to treat your real users as crash test dummies. If you leave performance testing to the final development stages, i.e. pre-production, and leave it to the testers alone, you are in danger of doing just that.

You have to make sure your end users are happy, and that means building performance considerations into the entire application lifecycle and conducting testing throughout the development process, not just at the end.

But the thought of adding yet more performance testing cycles to the workload of an already overstretched delivery team often elicits the same reaction as the five stages of grief: denial, anger, bargaining, depression and, finally, acceptance.

A good leader recognizes that this will be the reaction from their team and works to empower the team members to overcome it as follows:

Denial sets in when team members feel that the risks are not as great as you make out: perhaps they think operations will be able to tune the servers to optimize performance; perhaps the use of proven third-party technology leads to overconfidence; or, worst case, they assume end users can serve as beta testers.

There are too many performance landmines in the application delivery chain to leave this to chance. Bad database calls, too much synchronization, memory leaks, bloated and poorly designed web front-ends, incorrect traffic estimates, poorly provisioned hardware, misconfigured CDNs and load balancers, and problematic third parties all force you to take action.
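To make the first of those landmines concrete, here is a minimal, self-contained sketch of the classic "N+1 query" pattern and its batched alternative. It uses Python's built-in sqlite3 module with a made-up schema purely for illustration; it is not tied to any product or application discussed here.

```python
import sqlite3

# Made-up in-memory schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, f"customer-{i}") for i in range(100)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, 9.99) for i in range(1000)])

def order_totals_n_plus_one():
    """Landmine: one query per customer, i.e. 1 + N round trips to the database."""
    totals = {}
    for (customer_id,) in conn.execute("SELECT id FROM customers"):
        row = conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = ?",
                           (customer_id,)).fetchone()
        totals[customer_id] = row[0] or 0.0
    return totals

def order_totals_batched():
    """Fix: let the database aggregate everything in a single query."""
    return dict(conn.execute(
        "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id"))

slow, fast = order_totals_n_plus_one(), order_totals_batched()
# Same answer either way, but 101 queries versus 1.
print(all(abs(slow[c] - fast.get(c, 0.0)) < 1e-9 for c in slow))
```

Patterns like this rarely show up on a developer workstation with a tiny data set, which is exactly why they surface so painfully in production if performance is only checked at the end.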

Forcing the team to confront these landmines elicits anger as they consider the work required to test and wrestle with questions like how to test, which tools to use and where the budget will come from. Teams also start to ask how they can get actionable results in the limited amount of time left.

It’s easy to become overwhelmed at this stage, and team members feel depressed because it all seems too much to do with the limited time and resources they have before the go-live date.

Acceptance begins when developers realize that they simply can’t afford not to build performance considerations into the application lifecycle.

It can be a huge mistake to leave performance validation solely to testing teams at the end of product development. Performance must be treated as an integral requirement for all development and for every new feature, alongside the functional requirements.
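What "performance as a requirement" can look like day to day is a plain automated check with an explicit response-time budget. The sketch below is a hypothetical, minimal example; the function, threshold and names are placeholders, not anything prescribed by this article.

```python
import time

# Hypothetical response-time budget for one feature's critical path.
RESPONSE_TIME_BUDGET_MS = 200

def search_products(query: str) -> list:
    """Stand-in for the feature under test."""
    catalog = ("anvil", "rocket", "roller skates")
    return [item for item in catalog if query in item]

def test_search_meets_budget():
    start = time.perf_counter()
    search_products("ro")
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms <= RESPONSE_TIME_BUDGET_MS, (
        f"search took {elapsed_ms:.1f} ms; budget is {RESPONSE_TIME_BUDGET_MS} ms")

if __name__ == "__main__":
    test_search_meets_budget()
    print("performance budget met")
```

Run as part of every build, a check like this turns a performance regression into a failing test during development rather than an incident after go-live.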

In an ideal world, if we take testing seriously enough to integrate it into the entire application lifecycle, we can ensure we end up with potentially shippable code of high quality and great performance, and stay ahead of the competition. This truly needs to happen; otherwise, organizations risk ending up with great ideas and great features that fail due to poor performance.

The good news is that testing tools are more affordable and easier to use than ever before. Simple SaaS-based load testing tools now exist with pay-as-you-go models that eliminate costly upfront hardware and software that sits unused between testing cycles. Some solutions now offer developer-friendly diagnostic capabilities that improve collaboration between QA and development, drastically shorten problem resolution time, and enable development to build performance testing approaches into the lifecycle earlier with little to no resource overhead. The ability to layer these capabilities into a siloed organization provides an incremental approach to building performance into the application lifecycle and gaining acceptance across all performance stakeholders.
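Even without a commercial tool, a developer can get a first, rough read on load behavior with a few lines of script. The sketch below, using only Python's standard library, simulates a handful of concurrent virtual users against a placeholder URL and reports median and 95th-percentile response times; the URL, user count and request count are assumptions to be replaced with values from a real test environment.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median

TARGET_URL = "http://localhost:8000/"   # placeholder; point at a test environment
VIRTUAL_USERS = 20
REQUESTS_PER_USER = 10

def one_request(_):
    """Issue a single request and return (success, elapsed seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(one_request, range(VIRTUAL_USERS * REQUESTS_PER_USER)))
    timings = sorted(t for ok, t in results if ok)
    errors = sum(1 for ok, _ in results if not ok)
    if timings:
        print(f"requests: {len(results)}, errors: {errors}")
        print(f"median: {median(timings) * 1000:.0f} ms, "
              f"p95: {timings[int(len(timings) * 0.95)] * 1000:.0f} ms")
    else:
        print(f"all {len(results)} requests failed")
```

A script like this is no substitute for a proper load testing tool, but it is enough to catch gross regressions early and to start the conversation between development and QA well before the final testing cycle.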

Steve Tack is CTO of Compuware’s Application Performance Management Business Unit.

Steve Tack is Chief Technology Officer of Compuware's Application Performance Management (APM) business where he leads the expansion of the company's APM product portfolio and market presence. He is a software and IT services veteran with expertise in application and web performance management, SaaS, cloud computing, end-user experience monitoring and mobile applications. Steve is a frequent speaker at industry conferences and his articles have appeared in a variety of business and technology publications.
