
How to Monitor Docker Containers

Learning Objectives

• Understand the challenges of Docker container monitoring

• Understand how to monitor Docker containers

• Understand the pros and cons of using third-party monitoring software

Docker containers offer a modern way to package applications in a portable and reproducible manner. With containers, DevOps teams can scale their application deployments dynamically in response to change, patch existing systems, and handle unexpected loads without significant operational overhead.

The highly dynamic nature of containers poses challenges in tracking their metrics, health, and performance counters. Monitoring and alerting for Docker containers must be able to scale and provide accurate and sufficient information.

This article serves as a step-by-step guide to monitoring Docker containers. We discuss the most practical ways to monitor Docker containers, then explain the pros of using a third-party logging or monitoring tool like Mezmo.

Let’s get started.

Docker Container Monitoring Challenges

Monitoring Docker containers means tracking the metrics and process information of running containers. Each running container represents an isolated environment built on lightweight, portable Linux namespaces. Docker is a tool that orchestrates the process of building, packaging, and running containers. You run Docker containers from prepackaged images, which are built from definition files (Dockerfiles) that describe the steps required to package an application.

The critical issue is that traditional monitoring tools like Cacti and Nagios were designed for entirely different kinds of systems, mainly monolithic applications running on VMs or directly on the host machine. Using those tools to monitor Docker containers adds little value unless you install specific add-ons or follow complex setup steps to make them work.

To monitor Docker containers, these tools must adapt: they need to be aware of how Docker shares resources across the host system and of the context of a request as it passes through the system. Docker containers are also ephemeral, which makes measuring and correlating related events even more difficult.

Monitoring Docker Containers

To successfully monitor Docker containers, the industry-standard starting point is better log management. Each application packaged in a Docker container uses a standard logging library to send events to preconfigured outputs. The most basic and critical outputs are the standard output and standard error streams. Docker picks up and forwards those streams, and you can inspect them with the docker container logs command.
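For instance, once a container is running, you can tail its combined output directly; the container name below is just a placeholder.

> docker container logs --follow --tail 100 my-node-app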

For example, let’s take a look at a simple Node.js application that serves some endpoints:

server.js

"use strict";

const express = require("express");
const logger = require("./logger");

const PORT = process.env.PORT || 8080;
const HOST = process.env.HOST || "localhost";

const app = express();

app.get("/", (req, res) => {
  res.send("Hello World!");
  logger.info("Server Sent: Hello World!");
});

app.get("/throw", (req, res) => {
  throw new Error("Invalid");
});

app.use((err, req, res, next) => {
  res.status(500).send("Internal Server Error");
  logger.error(
    `${err.status || 500} - ${res.statusMessage} - ${err.message} - ${req.originalUrl} - ${req.method} - ${req.ip}`
  );
});

app.use((req, res, next) => {
  res.status(404).send("Page Not Found");
  logger.error(
    `404 - ${res.statusMessage} - ${req.originalUrl} - ${req.method} - ${req.ip}`
  );
});

app.listen(PORT, HOST, () => {
  console.log(`Server Running on http://${HOST}:${PORT}`);
  logger.info(`Server Running on http://${HOST}:${PORT}`);
});
logger.js

const { createLogger, transports, format } = require("winston");

const logger = createLogger({
  format: format.combine(
    format.timestamp({ format: "YYYY-MM-DD HH:mm:ss:ms" }),
    format.printf((info) => `${info.timestamp} ${info.level}: ${info.message}`)
  ),
  transports: [new transports.Console()],
});

module.exports = logger;

This uses Express to set up a server with a few endpoints, along with the winston logger, which writes the logs to the console. We’ll use the following Dockerfile for this application:

Dockerfile

FROM node:14-alpine

ENV HOST='0.0.0.0'
ENV PORT=8080

RUN addgroup app && adduser -S -G app app
RUN mkdir -p /app && chown -R app:app /app

USER app
WORKDIR /app

COPY package*.json ./
RUN npm install
COPY . .

EXPOSE 8080

CMD ["node", "server.js"]

This creates a non-root user, sets the application’s working directory, installs the dependencies, copies the application code, and starts the server.

One way to add container-specific information for tracking purposes is to pass log options when you issue the run command. Using --log-opt, you can configure the log format of the Docker container so that you get a unique view of the running application:

> docker build --pull --rm -f "Dockerfile" -t nodejsexample:latest "."

> docker run -p 8080:8080 --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" nodejsexample:latest

You can also tell the logging driver to include container labels and environment variables, which attach extra information for tracking purposes:

> docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 -p 8080:8080 --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" --log-opt labels=production --log-opt env=env1,env2 nodejsexample:latest

Docker also supports different logging drivers if you want to integrate with external tools. The main idea is to use specialized tools that aggregate logs from containers and forward them to a monitoring service.
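If you want every container on a host to use the same logging driver without repeating these flags, you can also set a default in the Docker daemon configuration. Below is a minimal sketch assuming a Fluentd collector listening on localhost:24224; after editing the file, restart the Docker daemon for the change to take effect.

/etc/docker/daemon.json

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224"
  }
}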

If you don’t want to use a logging driver other than the default one, you also have the option of running a dedicated logging container. In this case, a preconfigured container collects the log output of your other containers and sends it to a service.

Mezmo, for example, offers that option using their Docker image. It hooks into the Docker host’s /var/run/docker.sock socket file and allows you to read the contents of the logs before sending them to the Mezmo platform. Overall, both approaches are flexible and easy solutions for handling and sending the output of the logs.
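As a rough sketch, running such a logging container typically looks like the command below. The image name, tag, and environment variable are illustrative placeholders rather than Mezmo’s actual values, so consult the vendor’s documentation for the exact image and configuration.

> docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro -e INGESTION_KEY=<your-ingestion-key> example/logging-agent:latest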

You can also improve your monitoring by including observability metrics from the container. Next, we will explain this concept as it’s more relevant in a Kubernetes environment.

Monitoring Containers in Kubernetes

Kubernetes takes the idea of container orchestration to the next level. It is an extensive system that manages a set of cluster nodes, with each node acting as a server for hosting multiple pods. Each pod can contain one or more containers running together.

Kubernetes monitoring involves reporting tools and processes that proactively monitor all of the cluster's containers, pods, and nodes. It also introduces non-trivial logistical problems, given that Kubernetes generally runs a very dynamic environment: it can schedule many containers, relocate them between nodes, and scale nodes up and down.
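The basic log-inspection workflow still applies at the pod level. As a starting point, you can pull recent logs and resource usage for a single pod with kubectl; the pod and container names below are placeholders, and kubectl top requires the metrics-server add-on to be installed in the cluster.

> kubectl logs my-app-pod -c my-app-container --tail=100 --timestamps
> kubectl top pod my-app-pod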

One idea that works well within a Kubernetes environment is collecting observability data. This data provides insights that help administrators understand the internal state of a system based on its external outputs.

In our example, we can include OpenTelemetry tracing by plugging in the following instrumentation code within the winston logger:

logger.js

const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { WinstonInstrumentation } = require('@opentelemetry/instrumentation-winston');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { createLogger, transports } = require("winston");

const provider = new NodeTracerProvider();
provider.register();

registerInstrumentations({
  instrumentations: [
    new WinstonInstrumentation({
      logHook: (span, record) => {
        record['tracing.node.example'] = provider.resource.attributes['node.example'];
      },
    }),
  ],
});

const logger = createLogger({
  transports: [new transports.Console()],
});

module.exports = logger;

As you can see, the code that you have to add is minimal, but it performs many functions behind the scenes. The benefit of this approach is that it abstracts away the collection and management of the tracing data used for observability purposes. The development team does not have to overthink configuration or unique tagging, as the library handles the inner details. You can also configure exporters to transmit the tracing data to external services like Jaeger or Prometheus. With these configurations in place, it becomes easier to include monitoring on every application or container deployed in the Kubernetes cluster.
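As a rough sketch of that last step, here is how a span exporter could be wired into the tracer provider from the snippet above. The Jaeger endpoint is an assumption for a collector running locally on its default HTTP port, and newer OpenTelemetry SDK versions may expect the span processor to be passed to the provider's constructor instead.

logger.js (excerpt)

const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');

const provider = new NodeTracerProvider();

// Forward every finished span to a local Jaeger collector (endpoint is an assumption).
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new JaegerExporter({ endpoint: 'http://localhost:14268/api/traces' })
  )
);

provider.register();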

Pros of Using Third-Party Monitoring Tools

Given the current setup for monitoring Docker containers, adopting a third-party monitoring tool is appealing instead of relying on an on-premises open-source option. Sure, there are many good open-source tools for Docker monitoring, like cAdvisor and Prometheus, but they have a few limitations compared to external vendors. Let's discuss why an organization should opt for a third-party monitoring tool:

  • Enterprise support: Enterprises need the best technical help, documentation, and issue resolution for monitoring infrastructure. They won't risk leaning on unreliable open source support channels that don't guarantee assistance.
  • Premium integrations: Third-party log management and monitoring tools often offer premium integrations with popular open-source and proprietary systems. Enterprises that purchase these platforms can also get feature requests for new or existing integrations prioritized to fulfill their requirements.
  • Scalability and performance: When you pay a third-party vendor to handle your monitoring data, you expect them to be able to handle the load. Self-hosted open source tools might not scale very well unless you spend considerable time planning the architecture. If you use a third-party monitoring tool like Mezmo instead, you offload this maintenance and the management risks of scaling logging and monitoring requirements. Mezmo also offers on-premises solutions that work with your infrastructure and cater to your compliance requirements – which is another good reason to consider a third-party monitoring tool.

We are not saying that you should not consider (or adopt) an open-source monitoring tool for monitoring Docker containers. Instead, you should always consider the inherent challenges of open-source software to make informed business decisions.

Docker Container Monitoring Is Easier Now

Monitoring Docker containers is becoming a more mainstream process. When Docker was still in its infancy, there was tremendous concern about how to scale monitoring and alerting in a container environment, especially with the advent of Kubernetes. Now that the process is more mature, third-party monitoring tools like Mezmo can offer a distinct advantage by providing multi-cluster container log management under a single view.
