The Basics and Challenges of Kubernetes Logging and Monitoring

Learning Objectives

  • Discover why Kubernetes logging is a challenge
  • Learn Kubernetes logging basics
  • Explore the three Kubernetes logging types

No other container orchestration tool is as popular as Kubernetes, so it has become the de facto standard for automating, managing, monitoring, and logging containers. Kubernetes manages containers out of the box, but you still need to configure it for monitoring and logging. Several native logging options exist in Kubernetes for tracking the health of your container environment. Because native logging can carry significant overhead, some administrators choose third-party solutions instead. Below, we discuss the basic functionality and the commands used to set up monitoring and view log information from the command line.


Why is Kubernetes logging a challenge?

When you think of logging, you might picture simple error information sent to a file or database for later review, but logs contain valuable data for analyzing and monitoring an environment, and they can feed analytics and visualization tools that track its health. With containers, logging is more challenging than basic application logging because of the many moving parts and resources involved. When a container crashes or is deleted, the system cleans it up and creates a new one, and the logs are removed along with the deleted container. Any log data captured about anomalies before deletion is lost.

If you’re familiar with virtual server monitoring, the challenges of logging Kubernetes events are similar. Virtual servers spin up and down as needed, and once a server is destroyed, any events contained within it are also deleted. Containers likewise have their own events, and when administrators or Kubernetes delete container logs, any data they held about anomalies and the reasons a container failed is destroyed with them.

Regardless of the challenges, it’s important for organizations to set up monitoring to ensure stability of the environment. There are a few options for logging Kubernetes events:

  • Use a centralized logging solution. A third-party solution will log pods, clusters and their nodes along with their resources within the Kubernetes system.
  • Customize a solution. You could build your own, but it’s a lot of overhead on developers and could take months to build, test, deploy, and debug. Even if you build your own, it’s possible that this solution won’t function well and possibly won’t integrate with other solutions.
  • Use vendor-specific cloud provider solutions. AWS, GCP, and Azure have their own solutions, but working with a specific vendor locks you into that solution. If you decide to switch vendors, you would need to provision new logging solutions.

Of the three options above, the most efficient, scalable, and flexible is a log monitoring tool that works in the cloud. Since your containerized environment will likely run in the cloud, a cloud-hosted logging solution is quicker to provision and configure, and it starts monitoring events sooner than the alternatives. A third-party solution is also vendor agnostic. A vendor-specific option is similarly quick to provision, but it is locked to the vendor’s platform.

Not being tied to a specific vendor is important in enterprise environments, where it’s not uncommon for administrators to use a secondary provider as a disaster recovery hot site: data and applications are replicated to the secondary provider in case the main provider suffers a catastrophic failure. In this case, a vendor-specific setup would require a second logging solution, while a vendor-agnostic solution lets you use the same logging tool in both environments.

Kubernetes Logging Basics

Servers write application and environment events to logs in configured text files. For example, on Linux, Kubernetes stores container logs in files such as /var/log/containers/app.log. This works well for a server environment, but containers behave differently from a simple application running on an operating system.

With Kubernetes, pods dynamically spin up or down, and they can move across nodes in a cluster. Most environments run several containers across clusters, and these containers store their own logs. This means that you should have a solution that collects and aggregates logs into one location.

Kubernetes has a logging framework that captures standard output (stdout) and standard error (stderr) from each container and sends it to a central log. Kubernetes manages these logs, and administrators can view the log for an individual container by typing “kubectl logs <pod> -c <name of container>” at the command line.

Another option is to configure Kubernetes to use application-level logging. This configuration relies on each container having its own individually configured logging component, which is much less convenient than the previous solution: if any container’s configuration changes, the logging solution must change with it.

What are the three types of container logs?

Unlike traditional environments, where an application simply runs on an operating system installed on a virtual machine or physical server, containerized environments have several layers. In Kubernetes, the three main layers are the containers themselves, the nodes that run the containers, and the clusters that host multiple nodes. For a full overview of all errors, you need to log all three layers and aggregate the results somewhere a visualization tool can analyze the output. To understand Kubernetes logging, here is a brief overview of each logging type.


Pod Logging

Every pod captures errors and logs generated by its application. As soon as you configure and deploy a pod, you can see its logs in Kubernetes from the command line. Suppose you have a pod whose metadata name is set to “myapplication.” After you’ve deployed it, type the following at the command line:


kubectl logs myapplication

The output from the above command shows the logs generated by the application container. Each container has its own log associated with it.
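A few useful variations of this command are worth knowing. These are standard kubectl options, shown here against the “myapplication” pod from the example above (they assume a running cluster):

```shell
# Stream logs in real time (follow mode)
kubectl logs -f myapplication

# View logs from a specific container in a multi-container pod
kubectl logs myapplication -c <name of container>

# Retrieve logs from the previous instance of a crashed container
kubectl logs myapplication --previous
```

The --previous flag is especially useful given the challenge described earlier: when a container crashes and restarts, it lets you inspect the logs of the terminated instance before they are gone for good.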


Node Logging

Containers running within nodes write their information to the stdout or stderr stream, depending on the type of event. These streams are picked up by the kubelet service running on the node and handed to a configured logging driver, which writes the information to a log file. The file is normally located in the /var/log/containers directory.

The kubelet service runs at the operating system level, so it logs the health of the underlying application environment. It also handles log rotation so that log files do not fill the available storage space. To collect kernel logs, you need systemd installed on your node. System-level events recorded in these logs help identify kernel issues, which could otherwise lead to more serious downtime.
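As a sketch of where to look on a node, the commands below list and follow the per-container log files mentioned above (the filename pattern is illustrative; exact names vary by container runtime and Kubernetes version):

```shell
# List the per-container log files on this node
ls /var/log/containers/

# Follow one container's log file; names generally combine
# the pod, namespace, and container identifiers
tail -f /var/log/containers/<pod>_<namespace>_<container>.log
```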

Node logs can be viewed using the following command:


journalctl


Any changes to the system will appear in these logs. For example, if an administrator changes environment variables, the change will show up in the node logs.
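In practice, you will usually filter journalctl output rather than read the entire journal. These are standard journalctl flags, assuming the kubelet runs as a systemd unit on your node:

```shell
# Logs for the kubelet service only
journalctl -u kubelet

# Kernel messages, newest first
journalctl -k -r

# All entries since the last boot
journalctl -b
```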


Cluster Logging

To log entire clusters, you need a third-party log aggregator; Kubernetes has no native support for cluster-level logging and aggregation. This is where Mezmo, formerly known as LogDNA, can benefit administrators responsible for Kubernetes monitoring and logging. A few alternative approaches administrators can use include:

  • Configure a node-level logging agent that executes on every node.
  • Use a sidecar container in each pod configured specifically to handle the application’s logs.
  • Expose logs to an aggregator via the application.
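The sidecar approach above can be sketched as a pod in which a second container streams the application’s log file to stdout, where the node-level machinery described earlier picks it up. This is an illustrative example only (the pod name, image, and paths are placeholders), applied here with a shell heredoc:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: busybox
      # Hypothetical app that writes to a file instead of stdout
      command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-sidecar
      image: busybox
      # The sidecar tails the shared file, so the kubelet captures it as stdout
      command: ["/bin/sh", "-c", "tail -n +1 -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
EOF
```

Once applied, “kubectl logs app-with-logging-sidecar -c log-sidecar” surfaces the application’s file-based logs through the standard Kubernetes logging path.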

Why should enterprise containerized environments use cloud monitoring and logging?

You can use all three logging types if you choose to configure each one manually, but maintaining all three requires considerable overhead, and any configuration change could affect the way your monitoring and logging solution runs. To overcome the many challenges of node and container logging, and to get centralized cluster logging, use a third-party tool that aggregates information across your entire environment; it will simplify Kubernetes monitoring.

Several advantages of a centralized cloud logging solution for Kubernetes include:

  • Scalability: To support multiple clusters and nodes in production, you need the resources to match. Containers in a staging environment generate very few logs, but a busy production deployment can generate millions, so the environment must scale, which means additional overhead for administrators. A third-party cloud solution has scalability built in.
  • Alerts: Basic logging is not enough for good monitoring. Administrators need a way to get alerts on anomalies and critical errors. In a centralized environment, alerts across several clusters and nodes give analysts a clearer picture of what could be causing issues on the system.
  • Central overview: Fragmented logs and alerts are difficult to manage. With a centralized solution, administrators can view every node and application within a cluster in one location so that they can analyze issues faster.
