While mobile is quickly becoming the de facto platform for many of the business-critical applications deployed by banks, insurance companies and other enterprise organizations, ensuring an optimal end-user experience requires a robust mobile performance testing environment.
Accordingly, mobile enterprises need an environment that provides insight into the key mobile application performance indicators - such as response time, availability and business-critical transactions - across the various devices (and operating systems) in use on different networks and carriers.
Let's begin by looking at the current situation, which underscores the biggest challenge enterprises face in delivering successful mobile business apps.
According to recent surveys, end users are very conscious of application performance, in some cases even more so than functionality. Performance gaps across different networks and locations are evident, hence the need to performance-test the application before launch, on real devices, and to gather enough insight to optimize it.
Based on our experience with mobile enterprises, building an efficient mobile performance test strategy should consist (at a minimum) of the following five pillars:
1. Defining the supported devices and operating systems
Mobile devices have a significant impact on application performance. Smartphones and tablets are, in essence, small computing devices that deliver powerful capabilities, yet they are highly constrained in terms of resources. The problem is that end users expect and demand the same level of performance (if not better) from their mobile apps as they get on their desktop computers. Therefore, selecting the right mix of mobile devices to test on prior to launch is one of the basic criteria for effective performance testing.
Our findings show that different devices (and even the same device running a different OS version) may respond very differently to degraded network conditions or to server load.
Note that in order to stay in sync with your user community, the list of devices supported by your application should be dynamic and change in response to market trends (new devices, OS versions, etc.). This list should be updated on a quarterly basis, and your testing plan should take this into account.
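To make this concrete, a device/OS test matrix can be kept as versioned data that the test harness iterates over on every run and that is reviewed quarterly. Here is a minimal sketch in Python; the device models and OS versions are hypothetical placeholders, not a recommended list:

```python
# Hypothetical device/OS test matrix, reviewed quarterly against market data.
# Each entry pairs a real-device model with the OS versions still in use.
DEVICE_MATRIX = [
    {"model": "Galaxy S5", "os": "Android", "versions": ["4.4", "5.0"]},
    {"model": "Nexus 5",   "os": "Android", "versions": ["5.0"]},
    {"model": "iPhone 5s", "os": "iOS",     "versions": ["7.1", "8.1"]},
    {"model": "iPad Air",  "os": "iOS",     "versions": ["8.1"]},
]

def test_targets():
    """Yield every (model, os, version) combination the test plan must cover."""
    for device in DEVICE_MATRIX:
        for version in device["versions"]:
            yield device["model"], device["os"], version
```

Keeping the matrix as data rather than hard-coding it into test scripts makes the quarterly update a one-file change.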
2. Selecting the key business transactions
Select the functions of your application that users care about the most, and focus your testing on them using realistic and clear KPIs. In the initial testing stage, it is recommended to isolate your test environment and see how these business transactions perform in a “clean room” environment. (Subsequently, these scenarios should also be tested in a production-like environment with interruptions such as incoming calls, messages, etc.)
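As an illustration, the key transactions and their KPI targets can be declared once so every test run is judged against the same thresholds. A minimal sketch; the transaction names and the numbers are hypothetical, not benchmarks:

```python
# Hypothetical KPI targets per business transaction (seconds / percent).
TRANSACTIONS = {
    "login":          {"max_response_s": 3.0, "min_availability_pct": 99.5},
    "check_balance":  {"max_response_s": 2.0, "min_availability_pct": 99.9},
    "transfer_funds": {"max_response_s": 4.0, "min_availability_pct": 99.9},
}

def meets_kpis(name, response_s, availability_pct):
    """Return True when a measured run meets the declared targets."""
    target = TRANSACTIONS[name]
    return (response_s <= target["max_response_s"]
            and availability_pct >= target["min_availability_pct"])
```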
3. Simulating various network conditions
Mobile applications may behave differently depending on the device, the OS, and the type of carrier network infrastructure/technology (3G, 4G, WiFi). According to Shunra, an authority in network virtualization for software testing, “Production network conditions such as inconsistent bandwidth, high jitter, increased latency and packet loss all work to degrade application performance.”
Thus, it is imperative to analyze the impact of network conditions in your mobile performance testing scenarios. Your Performance team should simulate such network conditions, measure the user-facing KPIs (see above), and capture a network traffic trace (PCAP), which can serve as input to the optimization analysis phase as well as a baseline for the server load test script.
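On a Linux test host that routes the device's traffic, such conditions can be emulated with the standard tc/netem traffic-shaping tool. A minimal sketch, assuming a Linux gateway with root access; the interface name and the profile values are assumptions, not recommendations:

```python
import subprocess

# Hypothetical network profiles: (latency, jitter, packet loss, bandwidth).
PROFILES = {
    "wifi": ("20ms",  "5ms",  "0.1%", "20mbit"),
    "4g":   ("80ms",  "20ms", "0.5%", "8mbit"),
    "3g":   ("250ms", "50ms", "2%",   "1mbit"),
}

def apply_profile(name, iface="eth0"):
    """Shape outgoing traffic on iface with tc/netem (requires root)."""
    delay, jitter, loss, rate = PROFILES[name]
    subprocess.run(["tc", "qdisc", "replace", "dev", iface, "root",
                    "netem", "delay", delay, jitter, "loss", loss,
                    "rate", rate], check=True)

def clear_profile(iface="eth0"):
    """Remove the shaping rule and restore normal conditions."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)
```

The same gateway can run tcpdump during the test to produce the PCAP trace mentioned above.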
4. Building server load
Under material load, servers begin to behave in ways that may impact the end-user experience: packets sent to the mobile device may be delayed, lost, delivered out of sequence, etc. The server load test measures the mobile end-user experience on a real device while applying material load to the server farm.
Measurements on real devices are done by running the key transactions repeatedly throughout the test session and measuring the KPIs users care about. You can make the synthetic traffic load “mobile-relevant” by converting the captured PCAP file into a load script. Make sure you are able to isolate device issues from network issues and from application functional defects, which may be specific to a device or mobile OS.
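As a sketch of the load side, assuming the PCAP has already been reduced offline to a list of HTTP endpoints, concurrent workers can replay those requests while response times are recorded. The URLs and the worker count below are hypothetical:

```python
import time
import threading
import urllib.request

# Hypothetical endpoint list, derived offline from the captured PCAP.
ENDPOINTS = ["https://api.example.com/login",
             "https://api.example.com/balance"]
results = []  # (url, elapsed_seconds, success) per request

def worker(iterations=100):
    for _ in range(iterations):
        for url in ENDPOINTS:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    ok = resp.status == 200
            except Exception:
                ok = False
            results.append((url, time.monotonic() - start, ok))

# 50 concurrent workers stand in for the synthetic "material load";
# real devices run the key transactions in parallel with this traffic.
threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
```

In practice a dedicated load tool would replace this loop, but the principle is the same: the synthetic traffic mirrors what the PCAP shows real devices actually sending.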
5. Analyzing, debugging and optimizing
The output of a performance cycle based on the steps above is typically a detailed report showing the key statistics (response time, availability and others) per transaction and per real mobile device under different conditions. Such a report should highlight the bottlenecks in terms of network traffic, detailed device vitals, etc. Your Performance team needs to analyze and triage these reports in order to pinpoint the root cause of each performance issue, whether it is a network-related problem or an issue with the device itself.
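A minimal sketch of the triage step, aggregating the measurements collected above into per-device, per-transaction percentiles and flagging anything over its KPI target; the tuple fields and target format are assumptions:

```python
from collections import defaultdict
from statistics import quantiles

def summarize(measurements, targets):
    """measurements: iterable of (device, transaction, response_s) tuples.
    targets: {transaction: max_response_s}. Returns flagged bottlenecks."""
    buckets = defaultdict(list)
    for device, txn, response_s in measurements:
        buckets[(device, txn)].append(response_s)

    flagged = []
    for (device, txn), samples in buckets.items():
        # 95th percentile response time per device/transaction pair.
        p95 = quantiles(samples, n=20)[-1] if len(samples) > 1 else samples[0]
        if p95 > targets[txn]:
            flagged.append((device, txn, round(p95, 2)))
    return flagged
```

Grouping by device/transaction pair is what separates a device-specific regression (one device flagged everywhere) from a server or network bottleneck (one transaction flagged on every device).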
Bottom Line
As the mobile market continues to grow, new operating systems and a growing number of vendors are joining this rapidly expanding ecosystem. In the world of enterprise mobility, the importance of mobile performance testing continues to increase, and it has become a key part of the software development life cycle. By employing the right testing strategy and tools, and by enabling access to the widest variety of real devices and simulated networks, your organization can meet the challenges of mobile application performance, ensure end-user satisfaction and optimize business results.
Industry News
Harness launched its Cloud Web Application and API Protection (WAAP).
Solo.io announced Agent Gateway, an open source data plane optimized for agentic AI connectivity in any environment.
Opsera and Lineaje announced a strategic partnership to transform how enterprises secure and remediate open source and containerized software autonomously and at scale.
Kubernetes 1.33 was released today.
Docker announced a major expansion of its AI initiative with the upcoming Docker MCP Catalog and Docker MCP Toolkit.
Perforce Software announced the release of its latest platform update for Puppet Enterprise Advanced, designed to streamline DevSecOps practices and fortify enterprise security postures.
Azul announced JVM Inventory, a new feature of Azul Intelligence Cloud designed to address the complexity and risk of migrating off Oracle Java.
LaunchDarkly announced the acquisition of Highlight, a powerful, open source, full-stack application monitoring platform known for its error monitoring, logging, distributed tracing and session replay capabilities.
O’Reilly announced AI Codecon—a groundbreaking virtual conference series dedicated to exploring the rapidly evolving world of AI-assisted software development.
Veracode unveiled new capabilities offering proactive risk mitigation and automated security at enterprise scale.
Snyk launched Snyk API & Web, delivering a dynamic application security testing (DAST) solution designed to meet the growing demands of modern and increasingly AI-powered software development.
Check Point® Software Technologies Ltd. announced that it has ranked as a Leader and the only Outperformer for its Check Point Quantum Security Solutions in GigaOm’s latest Radar for Enterprise Firewall report.
Postman announced new releases designed to help organizations build APIs faster, more securely, and with less friction.
SnapLogic announced AgentCreator 3.0, an evolution in agentic AI technology that eliminates the complexity of enterprise AI adoption.