Logging in Kubernetes: How It Works, and How to Choose a Logging Solution
November 09, 2020

Roopak Parikh
Platform9

The adoption of cloud-native architecture and microservices is taking off, and Kubernetes is at the forefront of the technologies enabling this trend. However, while Kubernetes provides many features that make deploying and running microservices easy, there are a few additional services that need to be enabled for developers and DevOps teams to be productive and keep applications maintainable. Logging is one such vital service.

When it comes to logging, the Kubernetes ecosystem offers many choices for you to collect and manage logs.

This blog discusses different aspects of logging in Kubernetes, while identifying key considerations to keep in mind when developing a Kubernetes logging strategy.

Let's start by discussing the essentials of Kubernetes logging.

Types of Log Data in Kubernetes

In a Kubernetes environment, there are two main types of logs to manage:

Kubernetes logs

Kubernetes generates its own logs for the API server, etcd, the scheduler and kube-proxy. These logs are useful for troubleshooting or tracking down issues related to Kubernetes itself. Most of these logs are stored on the Kubernetes master nodes in the host operating system's log directory (on most Linux distributions, that's /var/log). Logs associated with worker node components such as kubelet and kube-proxy are stored on the worker node itself. The actual location of the logs and the associated log rotation policy may depend on the vendor or toolkit you are using to manage your Kubernetes cluster.
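For example, on clusters where the control plane components run as static pods, you can often view their logs directly with kubectl, and on systemd-based nodes the kubelet logs are available through journald. The pod name below is illustrative and varies by cluster and node name:

# Control plane components often run as static pods in the kube-system namespace
kubectl logs -n kube-system kube-apiserver-master-1

# On systemd-based nodes, the kubelet runs as a system service
journalctl -u kubelet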

Application Logs

These are the logs from applications hosted in Kubernetes. Typically, application log data is written to the standard output (stdout) and standard error (stderr) streams inside the container (or containers). While this holds for most modern applications, there are a few challenges to address:

■ Some containers may run multiple processes: typically when you are porting a large monolithic installation over to Kubernetes. This may mean there are multiple process logs you want to access.

■ Even if you have a single-process container, it is possible that the process is not writing to stdout/stderr and is instead writing to a log file.

I will discuss in detail below how to work around these architectural limitations, in the section addressing the sidecar logging solution.

Accessing Kubernetes Log Data

There are different ways to access log data in Kubernetes. The most straightforward, but also the least efficient, is to access the raw log data. For application logs, that means logging into the containers that host them and viewing the stdout and stderr streams directly from there. For Kubernetes logs, it means SSHing into the nodes and reading log files straight from the /var/log directory. If you are using Docker as the container runtime, application logs are typically stored under the /var/lib/docker/containers/<container-id>/ directory.

Tip: Kubernetes administrators should pay particular attention to setting up a log rotation policy for container logs; if you are not careful, you may run out of disk space. Similarly, note that container logs disappear after the container is evicted from the node.
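As a minimal sketch, assuming the Docker json-file logging driver, you can cap the size and number of log files per container via /etc/docker/daemon.json on each node (the values below are illustrative):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Note that this requires a Docker daemon restart and only applies to containers created afterwards.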

A slightly more centralized approach for accessing application logs is to use kubectl, which allows you to view logs for individual containers with a command like:

kubectl logs pod-name

This saves you from the hassle of having to access containers individually to view their logs, although it still only allows you to view log data for one pod at a time.
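A few commonly used variations of kubectl logs (all standard flags; the pod, container and label names are illustrative):

# Logs from a specific container in a multi-container pod
kubectl logs pod-name -c container-name

# Stream logs as they are written
kubectl logs -f pod-name

# Logs from the previous instance of a crashed container
kubectl logs --previous pod-name

# Logs from all containers of all pods matching a label selector
kubectl logs -l app=my-app --all-containers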

You can also use the kubectl get events command to view a list of what Kubernetes refers to as events. Kubernetes logs an event when certain actions are performed, like a container starting, or when problems are encountered, like a workload that runs out of resources.

Events provide a quick overview of cluster-level health without requiring you to dive into actual log files; the downside is that there is minimal context associated with each event, which makes it hard to use events alone to monitor cluster health and troubleshoot performance problems.
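For example, to surface recent problems across the cluster using standard kubectl flags:

# Show events from all namespaces, most recent last
kubectl get events -A --sort-by=.lastTimestamp

# Show only warnings
kubectl get events --field-selector type=Warning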

Beyond these resources, Kubernetes itself provides no functionality for log aggregation, analysis or management. Nor does it store logs for you persistently. It expects you to use third-party logging solutions for those purposes.

Using Log Aggregation with Kubernetes

The vanilla kubectl logs approach is clearly not scalable for large clusters and complex applications in production. For large-scale production environments, it is highly recommended to use a log aggregation system.

Kubernetes is compatible with a variety of third-party logging tools. If your logging solution of choice supports Linux and Linux-based applications, it almost certainly works with Kubernetes, too. Common open-source and commercial logging systems have recipes for consuming logs from Kubernetes.

That said, Kubernetes doesn't go out of its way to make it easy to collect log data and forward it to a third-party logging platform. Instead, you have to find a way to forward log data from your applications and/or Kubernetes itself to the logging platform you want to use to aggregate, analyze and store your log data.

Using logging agents to collect log data from Kubernetes itself is straightforward enough. Because those log files live in /var/log on the nodes, you can simply deploy logging agents directly to the nodes and collect log data from there.
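In practice, a node-level agent is deployed as a DaemonSet, so that exactly one copy runs on every node. Below is a minimal sketch assuming Fluent Bit as the agent; the namespace, image tag and mount paths are illustrative, and a real deployment would also need configuration for parsing and forwarding:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.9
        volumeMounts:
        # Read the node's log files, including container logs
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log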

For application logging, things are more complicated, because you can't deploy a conventional log agent directly inside a container to collect data from stdout and stderr. Instead, Kubernetes admins typically rely on one of three approaches for collecting application log data:

Node agent logging: As discussed earlier, this relies on an agent at the node level, typically deployed as a DaemonSet as sketched above. Most common agents are now Kubernetes-aware and annotate log entries with container, pod and namespace names, so that sorting and searching logs by pod or container name becomes easier. The agents themselves are deployed as Kubernetes artifacts. This is the most common approach.

Sidecar logging: Within each pod, you run a so-called sidecar container. The sidecar container hosts a logging agent, which is responsible for collecting log data from the other containers in the pod and forwarding it to an external log management platform or stack (see the sketch after this list).

Direct-from-application logging: You can design your application in such a way that it streams log data directly to an external logging endpoint, without relying on an agent inside the pod to collect it. Because this approach in most cases would require changes to the application itself and the way it exposes log data, it's a less common solution.
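To illustrate the sidecar pattern mentioned above, here is a minimal sketch for an application that writes to a log file instead of stdout (all names, images and paths are illustrative). The app and the sidecar share an emptyDir volume, and the sidecar streams the log file to its own stdout, where a node-level agent can pick it up:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox:1.35
    # Stream the app's log file to this container's stdout
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}

Alternatively, the sidecar can run a full logging agent that ships the file directly to an external backend, at the cost of running one extra agent per pod.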

How to Choose a Kubernetes Logging Solution

Because Kubernetes can be integrated with any mainstream, Linux-compatible logging stack, it may be hard to decide which logging solution is best. Here are some factors to consider:

No Lock-in: Kubernetes itself is open-source and gives you broad flexibility in designing cluster architecture. Ideally, your logging platform will also be flexible, so that it doesn't lock you into a certain technology stack or vendor ecosystem.

Log agent performance: If you run logging agents in sidecar containers, you add to the overall workload that your Kubernetes cluster has to support. For that reason, you'll want your logging agents to be lightweight and high-performance. Tip: some agents can easily be configured to filter logs at the source, keeping log volume low.

Log aggregation and centralization: To get the most out of Kubernetes logs, you should be able not just to view logs for individual applications and components, but also to compare log data across the entire cluster. The capability to aggregate all logs into a central location is critical for this.

Log storage and rotation: A Kubernetes cluster that runs for years will generate a lot of logs. Since it won't be practical in many cases to store every log forever, you'll want a logging solution that makes it easy to define how long logs are stored. Ideally, you'll be able to configure these policies in a granular way, so that you can keep certain types of logs longer than others if you need to. The ability to move older logs to a storage location where it costs less to keep them (such as a “cold” storage tier on a cloud storage service) is also beneficial, especially in situations where you need to retain Kubernetes logs for years due to compliance requirements.

Roopak Parikh is Co-Founder and CTO of Platform9
