APMdigest asked experts from across Application Performance Management (APM) and related markets for their recommendations on the best ways to ensure application performance before app rollout. The second set of six recommendations includes testing and analytics.
7. PERFORMANCE TESTING EARLY IN THE DEVELOPMENT LIFECYCLE
The best way to minimize the chances of performance defects creeping into production is to implement a comprehensive performance assurance strategy across IT. Perform a performance risk assessment on every new project and change request as early as possible in the application lifecycle, and make performance testing a mandatory quality gate for all releases.
Head of Performance, Intechnica
Ensure app performance before rollout by enabling continuous user testing, performance testing and load testing as early in the development cycle as possible. This means getting the most basic end-to-end functionality of any app up and running as quickly as possible, even if it's against mock back-end services. It allows the business, testers, developers and operations to see the whole and avoid performance surprises by interacting with a working 3D model of the product while working towards the "minimum likable product" or first release.
Chief Blogger and Analyst, APMexaminer.com
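The mock back-end idea above can be sketched in a few lines. This is an illustrative example only, using Python's standard library: a stub service returns canned JSON so the end-to-end flow can be exercised (and timed) long before the real service exists. All names and the payload are hypothetical.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mock back-end: serves canned data so front-end,
# testers and ops can exercise the full request path early.
class MockBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "items": [1, 2, 3]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep demo output quiet
        pass

def start_mock(port=8099):
    server = HTTPServer(("127.0.0.1", port), MockBackend)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_mock()
    with urllib.request.urlopen("http://127.0.0.1:8099/orders") as resp:
        payload = json.load(resp)
    server.shutdown()
    print(payload["status"])  # -> ok
```

Once the real service is ready, the mock is swapped out and the same end-to-end checks run unchanged.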
Too often, performance monitoring is added as an afterthought instead of being baked into the application during the deployment process. Then, when problems arise, they must be resolved without performance baselines or structured data on infrastructure dependencies. To ensure top performance, don't wait to add performance monitoring!
Director of Marketing, GroundWork
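Baking a baseline into deployment can be as simple as recording response-time statistics at release and comparing against them later. The sketch below is a hypothetical illustration (the tolerance factor and metric names are assumptions, not from the source):

```python
import statistics

# Hypothetical baseline check: capture response-time stats at deploy
# time, then flag later samples that regress past a tolerance factor.
def capture_baseline(samples_ms):
    ordered = sorted(samples_ms)
    return {"mean": statistics.mean(samples_ms),
            "p95": ordered[int(0.95 * (len(ordered) - 1))]}

def regressed(baseline, current_ms, tolerance=1.5):
    # A regression is flagged when the current mean exceeds the
    # baseline mean by more than the tolerance factor.
    return statistics.mean(current_ms) > tolerance * baseline["mean"]

baseline = capture_baseline([100, 110, 105, 120, 98])
print(regressed(baseline, [200, 210, 190]))  # -> True
print(regressed(baseline, [100, 105, 110]))  # -> False
```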
8. LOAD TESTING
Sr. Director, Product Marketing, New Relic
Undoubtedly, the best way to ensure peak performance in production is to load test the application with production-like load in a QA environment. This is easier said than done, primarily because QA and production environments differ in most organizations in server resources, amount of data and, often, network configuration. But if you don't want any surprises in production, it makes sense to thoroughly vet the application in a lower environment. A reliable load testing tool and an APM tool are a must. Be meticulous in recording and reporting performance metrics.
Application Support Expert, www.karunsubramanian.com
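A minimal load-test harness, to make the recording-and-reporting point concrete. This is a sketch with assumed names; in a real test the `transaction` callable would issue an HTTP request to the QA environment rather than sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    time.sleep(0.01)  # stand-in for a request to the app under test

# Hypothetical harness: n_users concurrent "users" each fire several
# transactions; latencies are collected and reported as percentiles.
def run_load(n_users=20, requests_per_user=5):
    latencies = []
    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            transaction()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(user)
    latencies.sort()
    return {
        "count": len(latencies),
        "p50": latencies[len(latencies) // 2],
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
    }

report = run_load()
print(report["count"])  # -> 100
```

Dedicated load-testing tools add ramp-up profiles, scripting and correlation; the point here is only that every run should leave behind comparable percentile numbers.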
Establish a test environment that simulates user transactions under load for capacity and stress testing. Trace transactions to identify real-time performance bottlenecks via dynamic code instrumentation at the library, method, or SQL invocation level. And correlate application degradation and failures with the infrastructure (network or storage) to determine root causes.
VP Product Management, ManageEngine
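Method-level instrumentation of the kind described above can be approximated with a timing decorator. The sketch below is illustrative only (function names are invented); APM agents do this transparently via bytecode or interpreter hooks:

```python
import time
import functools

# Hypothetical trace buffer: each instrumented call records its name
# and duration, mimicking method-level dynamic instrumentation.
TRACE = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append((fn.__name__, time.perf_counter() - start))
    return wrapper

@traced
def query_orders():
    time.sleep(0.02)  # stand-in for a SQL invocation
    return ["order-1"]

@traced
def render_page():
    return query_orders()

render_page()
names = [name for name, _ in TRACE]
print(names)  # -> ['query_orders', 'render_page']
```

Inner calls complete (and are recorded) before their callers, so the trace reconstructs the call path and shows where the time went.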
Ensure that the appropriate tooling is in place, ideally in Dev, Test and Production, to provide clear visibility and rapid triage of application performance under load.
Head of Performance, Intechnica
9. DATABASE TESTING
Everyone understands the importance of optimizing application performance. But if you ignore the performance of the database that drives your application, your end user's experience will suffer. Setting performance baselines for your database, and monitoring them as you roll out your application, is absolutely essential. This may include running production-level stress tests to ensure your database can handle the new data loads, setting thresholds to catch inefficient or poorly performing queries, and tracking real user response times to ensure a consistent user experience throughout the rollout process.
VP of Product Strategy, Idera
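A threshold on query time is the simplest of the guards mentioned above. The sketch below is a hypothetical illustration using SQLite; the threshold value and schema are assumptions:

```python
import sqlite3
import time

# Hypothetical slow-query guard: time every query against a baseline
# threshold so poorly performing SQL is flagged before rollout.
SLOW_QUERY_THRESHOLD_S = 0.5

def timed_query(conn, sql, params=()):
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD_S:
        print(f"SLOW: {sql!r} took {elapsed:.3f}s")
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("alice",), ("bob",)])
rows = timed_query(conn, "SELECT name FROM users ORDER BY name")
print([r[0] for r in rows])  # -> ['alice', 'bob']
```

Database monitoring products do the equivalent at the driver or server level, with per-query baselines rather than one global threshold.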
10. CLOUD TESTING
Organizations adopting SaaS apps like Office 365 or Google Apps often don't realize that their internet connectivity isn't up to the increased traffic. This can totally derail their migration to the cloud. To avoid this, teams need to thoroughly test cloud app availability and performance from each of their locations before, during, and after rollout begins, so they can detect and correct configuration and bandwidth issues.
VP Product Management and Marketing, Exoprise
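The per-location check described above amounts to a periodic availability probe. This sketch is illustrative only: the URL is a placeholder, and for simplicity any error (including an HTTP error status) is treated as "unavailable":

```python
import time
import urllib.request

# Hypothetical availability probe: measure reachability and latency of
# a SaaS endpoint from this office location.
def probe(url, timeout=5.0):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except OSError:
        # Covers DNS failures, refused connections, timeouts and
        # HTTP errors; the sketch treats them all as unavailable.
        status = None
    return {"url": url, "status": status,
            "latency_s": time.perf_counter() - start}

# Demo against a local port assumed closed, so no network is needed.
result = probe("http://127.0.0.1:1/", timeout=1.0)
print(result["status"])  # -> None
```

Run from each branch office on a schedule, the latency and status history shows whether a location's connectivity can carry the post-migration load.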
11. IT OPERATIONS ANALYTICS (ITOA)
Ensuring optimum application performance requires implementing a tool that provides real-time, automated data collection with deep analytics insights that allow for swift remediation. Prior to deployment, this kind of performance analytics solution can also be valuable in forecasting future capacity demands, and serve as the single source of truth by which DevOps and IT administrators collaborate more closely to ensure lean operational team processes are factored into new product rollout plans and designs.
VP of Marketing, Xangati
Between new projects and updates to existing ones, DevOps teams can deploy up to 20 applications each day! To ensure optimal performance, rolling out this code requires testing for glitches, which can be tedious and time-consuming. Automating this process with machine learning-powered anomaly detection software allows DevOps teams to identify issues in real time before the apps go live. This eliminates the need to write numerous rules and set several thresholds in the hope of anticipating all potential problems, making the process not only faster but more accurate. Because the technology flags all anomalous behavior, you can find and fix issues you didn't even think to look for – not just those related to a pre-determined set of KPIs.
VP of Marketing, Prelert
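The contrast with hand-written thresholds can be illustrated with the simplest statistical detector: flag values that deviate from the recent mean by several standard deviations, so no per-metric threshold needs to be chosen. This z-score sketch is a toy stand-in for the machine-learning products discussed, not a description of any vendor's method:

```python
import statistics

# Hypothetical detector: flag points more than z standard deviations
# from the mean of the preceding window, instead of fixed thresholds.
def anomalies(values, window=10, z=3.0):
    flagged = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9  # avoid divide-by-zero
        if abs(values[i] - mean) > z * stdev:
            flagged.append(i)
    return flagged

series = [100, 102, 99, 101, 100, 98, 103, 101, 99, 100, 250, 101]
print(anomalies(series))  # -> [10]
```

The same baseline adapts to each metric automatically, which is the property that lets anomaly detection surface problems no one thought to write a rule for.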
Many approaches have sought to ensure application performance, such as performance testing and capacity management. However, despite automated tools for streamlining the release management effort, errors can still slip into the process. Releasing first to a test environment and then to production won't ensure that the releases are consistent, since each environment has many configurations and dependencies of its own. Release validation based on IT Operations Analytics is an essential step toward ensuring application performance. By analyzing consistency across production and pre-production environments, you can make certain that your planning and testing efforts are based on the right configuration. And by verifying what was already checked and updated, you can identify any changes that were not implemented through the automated deployment tool once a change is certified in pre-production.
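At its core, the consistency analysis described above is a diff of each environment's effective configuration, ignoring keys that are legitimately environment-specific. The sketch below is hypothetical (the keys and ignore list are invented for illustration):

```python
# Hypothetical release-validation check: diff the effective
# configuration of pre-production and production so drift is caught
# before it invalidates the test results.
def config_drift(preprod, prod, ignore=("hostname", "instance_id")):
    drift = {}
    for key in set(preprod) | set(prod):
        if key in ignore:  # expected to differ between environments
            continue
        if preprod.get(key) != prod.get(key):
            drift[key] = (preprod.get(key), prod.get(key))
    return drift

preprod = {"db_pool_size": 50, "cache_ttl_s": 300, "hostname": "qa-01"}
prod = {"db_pool_size": 20, "cache_ttl_s": 300, "hostname": "prod-07"}
print(config_drift(preprod, prod))  # -> {'db_pool_size': (50, 20)}
```

A pool size tested at 50 but deployed at 20 is exactly the kind of drift that makes a passing QA run meaningless in production.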
Dynamic virtualized environments are already straining the capabilities of legacy monitoring tools, and container technology, while speeding deployment of new code, will also increase time spent troubleshooting exponentially. So what can DevOps teams do to prevent glitches from making it into production? Incident management tools, used to monitor events and alerts across the stack in production, continue to be extended into the QA environment to help catch change-related snafus before they affect the service or application in production. These tools, employing autonomics, machine learning and advanced analytics, are well suited to the pre-production phase, where "black swan" incidents are most common. They can identify anomalous activity without reliance on burdensome rules or models, and are ideally suited to the dynamic nature of container technology. As IT Operations Analytics (ITOA) tools become more sophisticated, this will prove critical to predicting problems during the QA stage, saving considerable time and resources. And with it, the promise and scale of container technology will be fulfilled.
Chairman, CEO and Co-Founder, Moogsoft
12. LOG ANALYSIS
How do you ensure optimum application performance before the app goes live? Try creating and monitoring application logs to troubleshoot pre-production issues.
VP Product Management, ManageEngine
Having analytics on log data before day one of your application's ship date is critical: it determines which components are contributing to an issue and allows you to react quickly.
Product Manager, Advanced Networking, SevOne
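Identifying which component is contributing to an issue often starts with a simple aggregation over log lines. The sketch below is a hypothetical illustration; the log format and component names are assumptions:

```python
import re
from collections import Counter

# Hypothetical pre-production log triage: count ERROR lines per
# component so the noisiest contributor stands out.
LINE = re.compile(r"^\S+ (?P<level>\w+) \[(?P<component>[\w.-]+)\] ")

def error_counts(lines):
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return counts

logs = [
    "2024-01-05T10:00:01 INFO [checkout] order created",
    "2024-01-05T10:00:02 ERROR [payments] gateway timeout",
    "2024-01-05T10:00:03 ERROR [payments] gateway timeout",
    "2024-01-05T10:00:04 ERROR [checkout] stock lookup failed",
]
print(error_counts(logs).most_common(1))  # -> [('payments', 2)]
```

Log management platforms do this at scale with indexing and dashboards, but even this much, run against pre-production logs, points the fix at the right component.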