DevOps is not really about the tools. DevOps is about people and processes as much as – if not more than – tools. Without cultural and process changes, technology alone cannot enable DevOps success. Several of the top experts in the DevOps arena made this very clear while DEVOPSdigest was compiling this list. That being said, a variety of technologies can be critical to supporting the people and processes that drive DevOps.
To develop this list, DEVOPSdigest asked experts from across the industry for their recommendations on a key technology required for DevOps. According to the many experts who have contributed their opinions to this massive 5-part list, the DevOps toolkit includes a wide range of both traditional and cutting-edge technologies. The purpose of this list is not to finalize a technology checklist for DevOps, but rather to explore how many different types of tools can impact, and enable, your DevOps initiative.
Looking at the many ways experts define DevOps, it is no surprise that many of the technologies on the list of must-have DevOps tools are designed to support those definitive aspects of DevOps: collaboration, breaking down silos, bringing Dev and Ops together, agile development, continuous delivery and automation, to name a few.
Part 1 of the list covers performance management, monitoring and analytics.
1. APPLICATION PERFORMANCE MANAGEMENT (APM)
There are clearly so many tools vital to DevOps advancement, but Application Performance Management is the one that stands out today as it has become so highly ingrained as the primary vehicle by which practitioners aggregate and share critical data. APM has a tremendous halo effect on the maturation of DevOps in general, serving as the de facto measuring stick for applications and process improvement, as well as a practical sounding board for experimentation. At the end of the day, organizations are employing a wide range of metrics to gauge various aspects of DevOps progress, but APM tools supply the most critical view – how this work is translating directly into end user interactions.
VP, DevOps Product and Solutions Marketing, CA Technologies
Using an Application Performance Management (APM) tool in a consistent manner to cover all environments across the SDLC (i.e. Dev, Test, QA, and Prod) will help facilitate an amplified feedback loop for application delivery. APM has the potential to lay the foundation for shifting your development timeline left to improve time to market, fostering smoother code deployments and minimizing anomalies in production.
Director of Customer Experience Management at the Auto Club Group and Founder of the APM Strategies Group on LinkedIn.
Enterprise production-focused Application Performance Monitoring (APM) products are essential for giving IT dev, ops and business teams real-time visibility into how applications are performing and supporting the business. APM is essential to the DevOps feedback cycle, allowing IT operations to uncover information such as capacity and application usage, so that architects and developers can design and build better quality applications. On top of this, a good APM solution should promote collaboration between business, IT dev and ops teams, especially during emerging app issues, so that business impact can be avoided.
Director of Technology Strategy, AppDynamics
Of all the must-have tools, APM is the one most often forgotten, yet it allows developers to have insight into the behavior of their code in production and gives them the ability to detect anomalies and defects and fix them as soon as possible. That's the tool that brings the most bang for your DevOps bucks. DevOps without real Ops is not sufficient. Furthermore, if Dev and Ops share the same operational data, that will reduce finger-pointing and enable faster and more effective troubleshooting, resulting in a better user experience for your customers.
Senior Principal Product Marketing Director, Oracle
A DevOps culture has a lot to do with trust and transparency using actionable metrics throughout the entire application lifecycle. At a minimum, an enterprise needs a core understanding of user interactions, transactions and overall digital performance. Core APM tools can give you the ability to synthetically and natively exercise performance while proactively uncovering problems that ensure users have the optimal experience and remain engaged. In an ideal world, an enterprise should have a comprehensive yet consistent suite of tools that include complete user monitoring, allowing the enterprise to focus on quality metrics and also identify performance issues as early in the application lifecycle as possible.
Global DevOps Practice Lead, Dynatrace
2. MONITORING
While DevOps is most often associated with automation and continuous delivery/integration tools, I believe the single most important tool that organizations need to properly adopt and use to make a transformation to DevOps is a monitoring system. You cannot improve what you can't measure. Implementing key metrics across the business to help recognize areas that are in most need of improvement is the key to identifying the bottlenecks that prevent DevOps adoption. If the metrics show that certain workflows are inefficient because of bloated processes or interaction between multiple groups, then those workflows need to be reviewed and changed. Insight into software development, deployment pipelines, and business process efficiency provides a complete picture of the areas in need of improvement. Once the problematic areas are identified, other tools can be plugged in where needed to improve and streamline the delivery pipeline.
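The "measure to find bottlenecks" idea above can be sketched in a few lines. Everything here is illustrative: the stage names and timestamps are hypothetical, standing in for data you would pull from your CI/CD, VCS, and ticketing systems.

```python
from datetime import datetime

# Hypothetical delivery-workflow events: (stage, start, end).
# In practice these would come from CI/CD and ticketing systems.
events = [
    ("code_review", "2023-01-02T09:00", "2023-01-02T17:00"),
    ("build",       "2023-01-02T17:00", "2023-01-02T17:20"),
    ("qa_signoff",  "2023-01-02T17:20", "2023-01-04T11:00"),
    ("deploy",      "2023-01-04T11:00", "2023-01-04T11:30"),
]

def stage_durations(events, fmt="%Y-%m-%dT%H:%M"):
    """Return hours spent in each stage of the workflow."""
    return {
        stage: (datetime.strptime(end, fmt)
                - datetime.strptime(start, fmt)).total_seconds() / 3600
        for stage, start, end in events
    }

durations = stage_durations(events)
bottleneck = max(durations, key=durations.get)  # the stage to review first
```

In this invented dataset the QA sign-off stage dwarfs the others, which is exactly the kind of workflow the quote suggests reviewing first.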
Hands down, one of the most important tools for DevOps success, if not the most important, is end-to-end monitoring with automation. The DevOps process requires everything to be monitored, and much of what that monitoring entails will need to be automated. Visibility across the application stack and into everything that drives performance is critical for the speed and collaboration that is the primary goal of a DevOps strategy. The impact of every change should be known. And to move faster, alerts, remediation and more should be automated.
VP, Product Marketing and Strategy, SolarWinds
Monitoring tools that can integrate easily into your stack are critical for enabling DevOps. With microservices architecture, there are hundreds, if not thousands, of pieces to the DevOps puzzle that only become more complicated if you can't quickly and easily get visibility into the health of those services. Monitoring every layer of your infrastructure is critical, but in order to reduce complexity, those monitoring tools must be able to work together to show the bigger picture — from your servers to your API endpoints — while still allowing you to isolate problems down to the microscopic level.
Marketing Lead, Runscope
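A minimal sketch of the "bigger picture with drill-down" idea above: per-layer check results rolled up into one health summary that still names the failing components. All layer, host, and service names here are hypothetical.

```python
# Hypothetical check results for each layer of the stack; in practice
# these would be fed by the monitoring tools covering each layer.
checks = {
    "infrastructure": {"web-01": True, "web-02": True},
    "services":       {"auth-api": True, "cart-api": False},
    "endpoints":      {"/checkout": False, "/login": True},
}

def health_summary(checks):
    """Aggregate per-layer checks into an overall status, while
    keeping the list of failing components for isolation."""
    failing = [
        (layer, component)
        for layer, components in checks.items()
        for component, ok in components.items()
        if not ok
    ]
    return {"healthy": not failing, "failing": failing}

summary = health_summary(checks)
```

The point of the design is the quote's two competing needs: the top-level `healthy` flag shows the big picture, while `failing` isolates the problem down to a single service or endpoint.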
DevOps exists in order for IT to be more responsive to the requirements of the business. The business wants an infinite number of enhancements implemented every minute. Your entire tool chain must therefore operate at the clock rate of your DevOps initiative. This places a new and special burden upon your DevOps monitoring tools. Many great new monitoring tools have been created to address the new requirements of agile development, DevOps, containerized microservices and highly distributed applications. New monitoring tools exist at the application performance layer, the virtualized network layer, the software defined infrastructure layer, and the virtualized storage layer. New monitoring tools exist across data types that are collected, with some focusing upon metrics and others focusing upon logs. This explosion of new requirements and the explosion of new monitoring tools to meet these requirements leads to the need to integrate these streams of data into forms easily consumable and useful to IT Operations and other constituencies.
The important thing to remember about DevOps projects is that they end and are turned over to IT operations. Make this turnover fast and easy by planning for integration with data center monitoring from the beginning. Your most important tool is the one that lets you move onto the next project!
Alliance Strategist, Zenoss
3. END USER EXPERIENCE MONITORING
Tools that turn the tide and expose production data back to developers are also increasingly deployed, but the processes around them are not. For example, tools that expose the actual end user experience in production need to become more transparent to engineering departments, not just operations. Even more so, many such tools provide value to the business side as well, so a successful deployment in the user experience monitoring domain would satisfy even more stakeholders.
Co-founder and Head of Product, Plumbr
4. SYNTHETIC MONITORING
DevOps implies that you need to communicate between Ops and Dev in a good way. Using application/API driven synthetic monitoring will always give you the yardstick to measure your success.
Founder and CEO, Apica
The DevOps toolbox is absolutely jam-packed, but one tool that cannot be overlooked is synthetic performance monitoring – as a complement to real user measurement (RUM). Going beyond providing a view of the user experience, performance monitoring tools must also be able to exactly pinpoint the source of bottlenecks – ideally before they impact a large number of users. This gives DevOps teams the opportunity to find and fix problems accurately and expeditiously, both before and during production. Given increased IT complexity both within the data center and across the Internet, finding the source of performance problems – whether for internal enterprise applications, or customer-facing web applications – has the potential to grow harder and more time-consuming. Synthetic performance monitoring data's ability to swiftly and accurately identify problem sources before they affect the digital user experience is the only way to reconcile two competing demands – growing user performance expectations, and faster and more frequent software roll-outs.
Director of Industry Innovation, Catchpoint
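Synthetic monitoring, as described above, boils down to running a scripted transaction on a schedule and grading it against a budget. A minimal sketch of one such check, with a stubbed fetch function standing in for a real HTTP client and a hypothetical URL:

```python
import time

def synthetic_check(fetch, url, latency_budget_s=1.0):
    """Run one scripted transaction against `url` and grade it
    against availability and latency budgets, as a synthetic
    monitor would on every scheduled run."""
    start = time.monotonic()
    try:
        status = fetch(url)  # fetch() returns an HTTP status code
    except Exception:
        return {"url": url, "ok": False, "reason": "unreachable"}
    elapsed = time.monotonic() - start
    if status != 200:
        return {"url": url, "ok": False, "reason": f"status {status}"}
    if elapsed > latency_budget_s:
        return {"url": url, "ok": False, "reason": "too slow"}
    return {"url": url, "ok": True, "reason": None}

# A stub keeps the sketch self-contained; a real monitor would wrap
# urllib or a full browser-driven transaction here.
result = synthetic_check(lambda url: 200, "https://example.test/checkout")
```

Because the probe runs on a fixed script rather than waiting for real users, it can flag a failing or slow endpoint before any user hits it, which is the "before production impact" property the quote emphasizes.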
5. INFRASTRUCTURE MANAGEMENT
If you are stranded on a desert island (but with a strong and reliable Internet connection) you still need to ensure your infrastructure is performing and your users are happy with their experience. What’s needed is a solid and extensible Digital Infrastructure Management Platform that can collect data from every layer of your stack, analyze what’s normal, what’s not, and visualize the impact of anomalous behavior. This will allow you to catch issues that can affect your operations before they truly impact your business.
Co-Founder and CTO, SevOne
Traditional operational tools for data centers are generally geared towards configuration management and monitoring, but they offer no visibility into encapsulated traffic for Infrastructure-as-a-Service clouds. From our own experience in DevOps, and by working with operators ourselves, we’ve seen firsthand the unmet need for analytic and end-to-end operational tools for network management. End-to-end operational tools facilitating provisioning and orchestration offer key enablers for organizations looking for DevOps transformation from traditional IT service management.
VP of Business, Midokura
6. INCIDENT MANAGEMENT
Organizations must understand that tools are only one part of the answer. They must have the people, processes, and tools in place in order to successfully implement a DevOps environment. There are a number of helpful tools in the DevOps ecosystem. You want to think along the lines of productivity, repeatability, and safety when considering tools best suited to facilitate a DevOps mindset. In the end, you want there to be direct paths in place from an engineer to any given environment for delivery of code, issue resolution (triage, notify, fix, and learn), and maintenance, and one way to do this is with streamlined incident management solutions. Being in position to detect quickly and fix quickly is also key to having a successful DevOps environment in your organization.
VP of Engineering, PagerDuty
7. ANALYTICS
DevOps needs tools that go beyond continuous release and deploy. They need tools that provide continuous analytics in order to measure and analyze application activities against business objectives. While the focus is often on continuous release and deploy, that is not always possible in some firms due to regulatory concerns. However, the need is there for continuous monitoring, tracking and analytics. First, use monitoring to gather end-user experience data as well as infrastructure and application data. Then, track and stitch transactions together to show a timeline of what happened. Finally, create shared metrics that enable the analysis to be compared to both technical and business objectives.
VP Product Management and Marketing, Nastel Technologies
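The "track and stitch transactions together to show a timeline" step above can be sketched as grouping spans by a shared transaction id and ordering each group by timestamp. The span records and field names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical span records emitted by different tiers, keyed by a
# shared transaction id ("txn"); timestamps are simplified integers.
spans = [
    {"txn": "t1", "ts": 3, "tier": "db",      "event": "query"},
    {"txn": "t1", "ts": 1, "tier": "browser", "event": "click buy"},
    {"txn": "t2", "ts": 5, "tier": "browser", "event": "login"},
    {"txn": "t1", "ts": 2, "tier": "app",     "event": "POST /orders"},
]

def stitch(spans):
    """Group spans by transaction id and order each group by
    timestamp, yielding an end-to-end timeline per transaction."""
    timelines = defaultdict(list)
    for span in spans:
        timelines[span["txn"]].append(span)
    for txn in timelines:
        timelines[txn].sort(key=lambda s: s["ts"])
    return dict(timelines)

timelines = stitch(spans)
```

Once stitched, transaction `t1` reads browser, then app, then database, which is the "timeline of what happened" that both technical and business metrics can be layered on.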
Application-centric analytics: Recent developments by many leading APM providers have focused on how application performance, usage, and business data can be correlated to analyze whether applications are driving desired commercial outcomes. This form of application-centric analytics is critical to DevOps, as there is no point in delivering new updates or features at speed if they are not providing value to users or the business. Application analytics allows DevOps professionals and the business to understand quickly how to tailor applications in order to optimize user experience and overall application quality.
Director of Technology Strategy, AppDynamics
In any company running DevOps, the critical tool is the data analytics platform: a central place where the most important machine data is stored, analyzed and presented. Combining multiple data sources from servers, devices and other DevOps tools is crucial in an ever-changing world. Being able to act on those insights will determine whether a company wins or loses.
Online Performance Consultant and Founder of Blue Factory Internet
Historically the focus for DevOps has been on deployment automation - pushing a change rapidly into production. But what happens if the change, automatic or manual, causes undesired impact? You can't always just roll back the change if it's incorrect. Today's IT Operations Analytics (ITOA) tools automatically analyze all actual changes and their impact across the entire IT environment together with release and deployment data for key operational insights. ITOA technologies help to predict early stability issues caused by the change and link changes to incidents for root cause analysis when incidents do happen. I believe that we will continue to see further expansion of ITOA and its integration into DevOps platforms. This will enable DevOps to implement truly agile, rapid and stable processes automated end-to-end.
As more and more companies embrace the method of constantly developing, releasing and updating software, there becomes an even greater opportunity for errors to occur. If your team is pushing out several software deployments a day, regardless of how good your testing and quality control is, it can quickly become impossible to know exactly how well everything works together. The first release may not work perfectly anymore when combined in an environment with the sixth release. With tight schedules, limited staff and limited budgets, it is important to embrace machine learning-based analytics as a solution to quickly finding any operational errors and bringing them to the team’s attention so they can be repaired. Machine learning analytics can do what humans cannot – namely, monitor all operational metrics in near real-time and look for anomalies that indicate a current or impending problem. By continuously learning your unique environment, including what constitutes normal and what does not – machine learning-based systems use that information to be even smarter about detecting future anomalies and problems, helping to make for a smoother and more successful DevOps process.
VP of Products, Prelert
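A toy version of the statistical baselining that such machine learning systems build on: flag any metric sample that deviates from the series mean by more than a chosen number of standard deviations. Real products use far richer, continuously updated models; the latency figures below are invented.

```python
import statistics

def detect_anomalies(series, threshold=3.0):
    """Flag indices of points deviating more than `threshold`
    standard deviations from the mean of the series."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a perfectly flat series has no anomalies
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# Steady response times with one spike; only the spike is flagged.
latencies_ms = [102, 98, 101, 99, 100, 103, 480, 97, 101]
anomalies = detect_anomalies(latencies_ms, threshold=2.5)
```

This catches the obvious spike a human would also spot; the value of the ML-based tools the quote describes is doing this continuously, across thousands of metrics, with baselines that adapt as "normal" changes.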
Business objectives and the usefulness of DevOps technologies change throughout the application lifecycle. During the production phase, the most important objective is business assurance and the most effective technology to accomplish this is through traffic-based analytics. Assuring the delivery of business services requires data coherence by continuously converting large volumes of traffic-based data into structured metadata which is optimized for real-time analytics platforms. The generated metadata delivers actionable insights for business agility, mitigating risk, and service assurance. Traffic-based intelligence is the foundation for a solution that effectively pinpoints the root-cause of performance problems, reduces the Mean-Time-To-Knowledge (MTTK) by 80% or more, and substantially reduces OpEx by proactively monitoring and managing the entire service delivery chain in a cost-effective manner.
Senior Enterprise Solutions Marketing Manager, NetScout
8. MANAGER OF MANAGERS
The DevOps agile development model extends to its tools, and we've seen a huge proliferation of tools introduced to improve some aspect of monitoring. While each tool solves a specific problem, the proliferation has inadvertently fostered silos of expertise, domain-specific views and massive data volumes generated in various formats. As application count and architectural complexity increase, the must-have tool to scale production support is an analytics-driven Manager of Managers (MoM). It has to ingest all of this operational event data (application to infrastructure) and apply machine learning to automate the noise reduction and alert correlation. This gives DevOps teams earlier warning of unfolding issues, better collaboration, visibility into root cause – ultimately reducing the impact of production outages and incidents.
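The noise-reduction and alert-correlation step a MoM performs can be sketched as bucketing raw alerts into incidents by service and time window. The alert records and the 300-second window here are illustrative assumptions, not any vendor's actual algorithm:

```python
# Hypothetical raw alerts from several monitoring tools; a MoM-style
# layer correlates them so responders see incidents, not noise.
alerts = [
    {"service": "checkout", "ts": 100, "msg": "high latency"},
    {"service": "checkout", "ts": 130, "msg": "error rate up"},
    {"service": "search",   "ts": 400, "msg": "node down"},
    {"service": "checkout", "ts": 900, "msg": "high latency"},
]

def correlate(alerts, window=300):
    """Bucket alerts into incidents: same service, with timestamps
    within `window` seconds of the incident's first alert."""
    incidents = []
    open_by_service = {}
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        current = open_by_service.get(alert["service"])
        if current is None or alert["ts"] - current["start"] > window:
            current = {"service": alert["service"],
                       "start": alert["ts"], "alerts": []}
            incidents.append(current)
            open_by_service[alert["service"]] = current
        current["alerts"].append(alert)
    return incidents

incidents = correlate(alerts)
```

Four raw alerts collapse into three incidents, with the two near-simultaneous checkout alerts grouped together; production MoM tools replace this fixed rule with learned correlation across far larger event streams.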