Log Analytics is DEAD
December 03, 2015

Albert Mavashev
jKool

Log Analytics is DEAD. Did I really say that? Yes, I did. Log analytics is the process of investigating logs in the hope of deriving actionable information that might be useful to the business. Many log analytics tools are used to gain visibility into web traffic, security, application behavior, etc. But how valuable and practical is log analytics in reality?

One basic precondition for log analytics is that the information to be analyzed must already be in log files, and here lies the basic problem:

In order to derive useful analytics from logs, one must have proper logging instrumentation and have it enabled everywhere, all the time.

Not only is this approach impractical and very expensive, except in a few limited cases, but it is also burdensome, imposing a significant performance overhead on the systems that produce these logs.
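To make that overhead concrete, here is a minimal sketch of the dilemma (assuming a typical Java service using SLF4J; the class and method names are hypothetical): verbose logging makes every request pay for message formatting and I/O, so teams turn it off in production, which is precisely when log analytics would need it.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CheckoutService {
        private static final Logger log = LoggerFactory.getLogger(CheckoutService.class);

        public void checkout(String orderId, double amount) {
            // Always-on diagnostic logging: every request pays for message
            // formatting and I/O, even when nobody is reading the output.
            log.debug("starting checkout for order {} (amount={})", orderId, amount);

            // ... business logic ...

            // The usual defense is to guard or disable DEBUG in production,
            // which is exactly what leaves log analytics blind when a
            // problem finally occurs.
            if (log.isDebugEnabled()) {
                log.debug("checkout completed for order {}", orderId);
            }
        }
    }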

One must log gigabytes and gigabytes of data, store this data, and then analyze it in order to detect a problem. I would call this a brute force approach. Like most brute force approaches, it is expensive, slow and unwieldy. In many cases log analytics is used to catch occasional errors or exceptions. Do we really need all these logs to catch a few outliers?

Log analytics quickly turns into a Big Data problem – store and analyze everything, everywhere, all the time. Is that really needed? Maybe, or maybe not …

Simple Example

You deploy log analytics and it tells you that you've got 100 errors or exceptions in the past hour. Typically, you will want to investigate this and start with a specific exception.

Your next question would be: "Is what I am looking at noise, or something that requires attention?" Then you will ask "what else happened?" and "why?" The series of questions you would ask might include the following:

■ What was my application doing?

■ What was the response time?

■ What were CPU and memory utilization?

■ What were the I/O rates and network utilization?

■ What was Java GC doing?

■ What other abnormal conditions occurred that I should be looking at?

There are so many variables: too many to look at and too much to analyze.
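Notice that most of these answers never appear in application logs at all. As a minimal illustration (assuming a JVM-based application and using only the standard java.lang.management API, not any particular vendor's agent), the heap, GC, and CPU-load figures live in the JVM's management beans and would have to be collected and correlated separately:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class JvmSnapshot {
        public static void main(String[] args) {
            // Heap usage at this instant -- a figure most application logs never record.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap used: %d of %d bytes%n", heap.getUsed(), heap.getMax());

            // GC activity: collection counts and accumulated collection time per collector.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }

            // 1-minute system load average, a rough proxy for CPU pressure
            // (negative on platforms that do not support it).
            double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
            System.out.printf("system load average: %.2f%n", load);
        }
    }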

What do you do? Unfortunately, this is where log analytics stops; you have to jump elsewhere. The path to root cause becomes lengthy and painful. You may know that there is a problem, but why you have the problem is, in many cases, not clear.

We have all this data (big data), yet we don't know what it means or where to look to find meaning. Of course, one can say that you can parse the log entries and extract metrics. But who will write the parsers? Who maintains the rules? Who writes the complex regular expressions? What if the required metrics are not in the log files? In most cases they won't be.
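For anyone who does go down the parsing road, here is a minimal sketch of what it entails (the log format and field names are hypothetical): one hand-written regular expression per log format, each of which silently breaks the moment the format changes, and none of which can recover a metric that was never logged.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LogMetricParser {
        // Hypothetical log format:
        // "2015-12-03 10:15:42 INFO order-service responseTime=125ms"
        private static final Pattern RESPONSE_TIME = Pattern.compile(
                "^(\\S+ \\S+) \\w+ (\\S+) responseTime=(\\d+)ms$");

        public static void main(String[] args) {
            String line = "2015-12-03 10:15:42 INFO order-service responseTime=125ms";
            Matcher m = RESPONSE_TIME.matcher(line);
            if (m.matches()) {
                // Extract timestamp, service name, and response time from raw text.
                System.out.printf("%s %s -> %d ms%n",
                        m.group(1), m.group(2), Long.parseLong(m.group(3)));
            }
            // Every new log format means another pattern to write and maintain,
            // and a metric that was never logged can never be extracted.
        }
    }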

The biggest problem with log analytics is that whatever is to be analyzed must always be logged. In other words, you need to know in advance what information root-cause analysis will require. How often do you know that in advance? The problem is precisely what you don't know, have not thought about, did not instrument, and did not log. It is unlikely you will have the information you need.

Customers don’t want log analytics; customers want solutions to their problems. So what do I propose? I think log analytics is really morphing into a larger discipline.

The Post Log Analytics World

It is Application Analytics: a discipline that combines logs, metrics, transactions, topology, changes, and more with machine learning techniques, where answers about quality of service, application performance, and business and IT KPIs are a click away.

This approach must be combined with smart instrumentation, heuristics, and even crowd-sourced knowledge that point to anomalies, suppress noise, and reveal important attributes without constantly collecting terabytes of data.
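As a toy illustration of one such technique (a generic sliding-window outlier check, not jKool's actual algorithm), an analytics layer can baseline a signal such as errors per minute and surface only the intervals that deviate sharply from recent history, rather than shipping every log line to storage:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class ErrorRateAnomalyDetector {
        private final Deque<Double> window = new ArrayDeque<>();
        private final int windowSize;
        private final double threshold; // deviation (in std devs) that counts as anomalous

        public ErrorRateAnomalyDetector(int windowSize, double threshold) {
            this.windowSize = windowSize;
            this.threshold = threshold;
        }

        // Returns true when the new observation deviates sharply from the
        // sliding window's mean; otherwise it just updates the baseline.
        public boolean isAnomalous(double errorsThisInterval) {
            boolean anomalous = false;
            if (window.size() == windowSize) {
                double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
                double variance = window.stream()
                        .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
                double stddev = Math.sqrt(variance);
                anomalous = stddev > 0
                        && Math.abs(errorsThisInterval - mean) > threshold * stddev;
                window.removeFirst();
            }
            window.addLast(errorsThisInterval);
            return anomalous;
        }

        public static void main(String[] args) {
            ErrorRateAnomalyDetector detector = new ErrorRateAnomalyDetector(10, 3.0);
            double[] errorsPerMinute = {2, 3, 2, 4, 3, 2, 3, 4, 2, 3, 2, 3, 40, 3};
            for (double count : errorsPerMinute) {
                if (detector.isAnomalous(count)) {
                    // Only the spike (40) is surfaced; the steady noise is suppressed.
                    System.out.println("anomaly: " + count + " errors in one interval");
                }
            }
        }
    }

The point is not the statistics; it is that the system, rather than the human, decides which slice of the terabytes deserves attention.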

How do I understand what I don't know or have not yet collected? How do I know what questions to ask? These are the questions the post log analytics world must answer.

Essentially, Application Analytics is about managing the risks lurking within application and IT infrastructures, which are inherently complex and "broken".

Log Analytics is dead, not because it is not useful, but because it must quickly evolve to the next level.

Albert Mavashev is Chief Technology Officer at jKool.
