The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced the launch of the Cybersecurity Skills Framework, a global reference guide that helps organizations identify and address critical cybersecurity competencies across a broad range of IT job families, extending beyond cybersecurity specialists.
The importance of app stability cannot be overstated. Today, more consumers purchase items or services online, and more employees use business apps as part of their day-to-day work. While B2C apps offer new ways for customers to engage with brands and help drive revenue growth, B2B apps give enterprises the opportunity to modernize operations without significant training. In short, apps are more prevalent than ever in both our personal and work lives, so it's critical that organizations deliver an error-free app experience.
While stability is a KPI owned by engineering organizations, and one that is gaining ground, it has a significant impact on overall business performance and growth. Simply put, users can't stand it when an app stalls or crashes. One bad experience can lose a customer forever. And with social media and app store reviews and ratings, that one bad experience can have a ripple effect that reaches far beyond the original user, severely harming a business's bottom line.
To provide engineering teams with hard data on how their apps compare to others in the industry, Bugsnag recently announced the results of its new report, Application Stability Index: Are Your Apps Healthy? To ensure accurate insights, we analyzed the performance of approximately 2,500 top mobile and web applications (as defined by session volume) within our customer base. This included data from eCommerce, media and entertainment, financial services, logistics, and gaming companies, among other verticals. Hopefully, this data can serve as a benchmark to help engineering teams determine their own application stability SLAs and SLOs, and provide guidance about when to build features versus fix bugs based on an app's current stability.
We viewed the results through the lens of "five nines," the goal infrastructure and operations teams set of 99.999% app uptime and availability. From this vantage point, while the data showed that the average mobile and web app has achieved strong stability scores, there's still room for improvement. Here's a deeper look at what we discovered about mobile and web app stability.
Mobile App Stability
The report evaluated apps from four mobile development platforms: Android, iOS, React Native, and Unity. Stability scores were negatively impacted by session-ending events, which include crashes, as well as ANRs (Application Not Responding) in Android, React Native, and Unity applications and OOMs (Out of Memory) in iOS applications.
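At its simplest, a stability score is the share of sessions that end without one of these events. Here's a minimal sketch of that calculation, using hypothetical type and field names rather than Bugsnag's actual data model:

```typescript
// Session-ending event categories described above (illustrative names).
type SessionEndingEvent = "crash" | "anr" | "oom";

interface Session {
  id: string;
  endingEvent?: SessionEndingEvent; // absent when the session ended cleanly
}

// Stability score: the percentage of sessions that did NOT end in a
// crash, ANR, or OOM.
function stabilityScore(sessions: Session[]): number {
  if (sessions.length === 0) return 100;
  const failed = sessions.filter((s) => s.endingEvent !== undefined).length;
  return ((sessions.length - failed) / sessions.length) * 100;
}

// Example: 2 session-ending events across 500 sessions -> 99.6
```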
Overall, the median stability score of mobile apps came in at 99.63%. This means that nearly one out of every 250 customers could have a completely broken experience with a mobile application. Compared to the "five nines" standard, a median stability of 99.63% indicates that engineering organizations have a clear opportunity to commit more resources to measuring and improving app stability and customer experience.
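To put that gap in perspective, here's the arithmetic at an assumed volume of one million monthly sessions (the volume is illustrative, not a figure from the report):

```typescript
const sessionsPerMonth = 1_000_000; // assumed traffic, for illustration only

// Sessions ending in a crash, ANR, or OOM at the observed median score:
const brokenAtMedian = sessionsPerMonth * (1 - 0.9963); // ≈ 3,700 sessions

// The same traffic held to the "five nines" standard:
const brokenAtFiveNines = sessionsPerMonth * (1 - 0.99999); // ≈ 10 sessions

console.log(Math.round(brokenAtMedian), Math.round(brokenAtFiveNines));
```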
Here's how the four mobile development platforms stacked up:
Android and iOS native applications tend to have high median stability because highly specialized developers work on these apps and have the expertise required to understand and address stability issues effectively. Compared to iOS applications, Android apps tend to have slightly lower median stability because Android presents a much less constrained development environment. The fragmentation of Android devices makes applications more difficult to test, whereas iOS development teams only need to provide a stable experience on the limited number of devices that Apple releases each year.
Web App Stability
The report evaluated five front-end development frameworks: Angular, Backbone, Ember, React, and Vue. Web stability scores were determined by unhandled exceptions, such as a bug that prevents the entire page from rendering, an event handler bug that causes a user interaction to fail, or an unhandled promise rejection, among others.
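Browsers expose global hooks for exactly these categories of failure. Here's a minimal sketch of capturing them client-side; the /errors endpoint and report helper are hypothetical, and a real monitoring SDK would batch, de-duplicate, and enrich these reports:

```typescript
// Hypothetical reporting helper; a production SDK would do much more.
function report(kind: string, detail: string): void {
  navigator.sendBeacon("/errors", JSON.stringify({ kind, detail }));
}

// Uncaught exceptions, e.g. a bug that stops the page from rendering
// or an event handler that throws during a user interaction.
window.addEventListener("error", (event: ErrorEvent) => {
  report("unhandled-exception", event.message);
});

// Promise rejections that no .catch() handler ever picks up.
window.addEventListener("unhandledrejection", (event: PromiseRejectionEvent) => {
  report("unhandled-rejection", String(event.reason));
});
```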
Overall, web apps had a median stability score of 99.39%, lower than mobile apps. The difference between web and mobile app stability may be driven by the fact that monitoring and addressing client-side issues in JavaScript applications generally requires more effort than doing so in mobile applications. Also, since mobile apps are newer, there's more of an emphasis on managing errors from the get-go, whereas web is an older discipline that had to learn this over time.
Angular, Ember, React, and Vue are modern, opinionated JavaScript frameworks that are built with error handling in mind. Angular and React were created and are sponsored by development teams at Google and Facebook, respectively. Engineering organizations working with these frameworks have access to the resources and documentation they need to investigate and fix errors that may affect application stability. Backbone, on the other hand, is an older and less opinionated web development framework. Dev teams don't get the same coding guidelines, best practices, and built-in considerations for error handling that the more recent frameworks offer, which may explain the lower median stability and wider range for Backbone apps.
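React's error boundaries are one concrete example of that built-in consideration: a component that catches rendering errors in its children instead of letting the whole page go blank. A minimal sketch, with illustrative fallback text and logging:

```typescript
import React from "react";

// An error boundary must be a class component; React invokes these
// lifecycle hooks when a descendant throws during rendering.
class ErrorBoundary extends React.Component<
  { children?: React.ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };

  // Switch to a fallback UI rather than unmounting the whole tree.
  static getDerivedStateFromError(): { hasError: boolean } {
    return { hasError: true };
  }

  // A natural place to forward the error to a monitoring service.
  componentDidCatch(error: Error, info: React.ErrorInfo): void {
    console.error("render error:", error.message, info.componentStack);
  }

  render(): React.ReactNode {
    return this.state.hasError
      ? React.createElement("p", null, "Something went wrong.")
      : this.props.children;
  }
}
```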
New Features Must Be Balanced with App Stability
App stability plays a crucial role in driving broad business outcomes, impacting conversion rates, engagement, loyalty, developer productivity, and competitive advantage. While delivering new features at a steady pace is also extremely important, these features will provide little value if an app is frequently crashing. Organizations must balance the need for new functionality with the need for an error-free experience.
Industry News
CodeRabbit is now available on the Visual Studio Code editor.
The integration brings CodeRabbit’s AI code reviews directly into Cursor, Windsurf, and VS Code at the earliest stages of software development—inside the code editor itself—at no cost to the developers.
Chainguard announced Chainguard Libraries for Python, an index of malware-resistant Python dependencies built securely from source on SLSA L2 infrastructure.
Sysdig announced the donation of Stratoshark, the company’s open source cloud forensics tool, to the Wireshark Foundation.
Pegasystems unveiled Pega Predictable AI™ Agents that give enterprises extraordinary control and visibility as they design and deploy AI-optimized processes.
Kong announced the introduction of the Kong Event Gateway as a part of their unified API platform.
Azul and Moderne announced a technical partnership to help Java development teams identify, remove and refactor unused and dead code to improve productivity and dramatically accelerate modernization initiatives.
Parasoft has added Agentic AI capabilities to SOAtest, featuring API test planning and creation.
Zerve unveiled a multi-agent system engineered specifically for enterprise-grade data and AI development.
LambdaTest, a unified agentic AI and cloud engineering platform, has announced its partnership with MacStadium, the industry-leading private Mac cloud provider enabling enterprise macOS workloads, to accelerate its AI-native software testing by leveraging Apple Silicon.
Tricentis announced a new capability that injects Tricentis’ AI-driven testing intelligence into SAP’s integrated toolchain, part of RISE with SAP methodology.
Zencoder announced the launch of Zen Agents, delivering two innovations that transform AI-assisted development: a platform enabling teams to create and share custom agents organization-wide, and an open-source marketplace for community-contributed agents.
AWS announced the preview of the Amazon Q Developer integration in GitHub.
The OpenSearch Software Foundation, the vendor-neutral home for the OpenSearch Project, announced the general availability of OpenSearch 3.0.