Container Logging & DevOps: The Future of Kubernetes Integration

4 MIN READ

I hosted a webinar covering why logging is important, how to choose a logging provider, and our experience setting up logging for Kubernetes containers: the Kubernetes logging framework and the best practices we've implemented internally and with our customers who run Kubernetes in production.

LogDNA's ultimate goal is to give developers a tool to access logs quickly: get in, get your logs, and get back to work. Our UI, UX, speed of search, and simplicity of integration are all built to solve this one problem. Our Kubernetes (container) logging integration is a prime example: with just two kubectl commands, your container logs are sent to LogDNA. There's no configuration, no messy sidecar container consuming additional resources, and no dependency on Fluentd.

Kubernetes Logging Best Practices

  1. Log to stdout and send errors to stderr: while this is standard practice when moving to a containerized environment, many apps still log to files. Logging to the standard streams lets the Kubernetes logging framework pick up those logs automatically and stream them wherever they need to go.
  2. Separate dev and prod into multiple clusters: this helps you avoid accidentally deleting a critical production pod. You can then use the kubectl config use-context command to switch between the two clusters easily (see the sketch after this list).
  3. Move variables into environment-specific ConfigMaps: instead of maintaining multiple deployment YAML files for dev and production environment variables, put them in a ConfigMap, which is essentially a YAML file of key/value pairs, and keep a separate ConfigMap per environment (staging, production, and so on). A minimal example follows this list.
  4. Avoid using sidecars for logging: generally speaking, a pod runs one container. A logging sidecar is a second container in the same pod that reads and forwards the first container's output, which consumes unnecessary resources at the pod level. There are scenarios where you can't avoid it, for example when you don't control the application and it writes logs to files, or when you want to hide output from the Kubernetes logging framework.
  5. Test locally and ephemerally prior to deployment: Minikube installs on your own laptop, and while it doesn't mimic everything Kubernetes does, it gives you a good idea of how your workload will run in production.
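
As a rough sketch of practices 2 and 3, the commands below show context switching and an environment-specific ConfigMap. The context names, namespace, and key/value pairs are hypothetical placeholders, not values from the webinar.

    # Switch between dev and prod clusters using kubectl contexts
    # ("dev" and "prod" are placeholders for entries in your own kubeconfig).
    kubectl config get-contexts          # list the clusters/contexts you have configured
    kubectl config use-context dev       # work against the dev cluster
    kubectl config use-context prod      # switch to prod only when you mean to

    # Create an environment-specific ConfigMap of key/value pairs
    # instead of maintaining separate deployment YAML per environment.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config-staging        # one ConfigMap per environment, e.g. app-config-production
    data:
      LOG_LEVEL: "debug"
      API_URL: "https://staging.example.com"
    EOF

    # A pod can then load every key as an environment variable with:
    #   envFrom:
    #   - configMapRef:
    #       name: app-config-staging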

Default Kubernetes Logging Framework

[Diagram: Kubernetes logging framework]

The logs are ephemeral: when a container writes to stdout, Kubernetes automatically captures that output to files under /var/log on the node. When a pod is evicted, crashes, is deleted, or is rescheduled onto a different node, the logs from its containers are gone. This is different from logging on traditional servers or virtual machines: when your app dies on a virtual machine, you don't lose the logs until you delete the machine. Kubernetes cleans up after itself, and the logs do not persist. Understanding the ephemeral nature of default logging on Kubernetes is important because it points to the need for a centralized log management solution.
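
To see the default framework and its ephemerality in action, a quick sketch (pod names are placeholders):

    # Read logs through the default Kubernetes logging framework.
    kubectl logs <pod-name>                  # stdout/stderr captured by the kubelet
    kubectl logs <pod-name> --previous       # last terminated container, if it crashed

    # On the node itself, the captured output lives under /var/log.
    ls /var/log/containers/

    # Once the pod is deleted (or rescheduled to another node), those logs are gone:
    kubectl delete pod <pod-name>
    kubectl logs <pod-name>                  # error: pod not found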

Logging Solutions for Kubernetes

[Diagram: Fluentd vs. LogDNA]

Kubernetes logging with Fluentd

When we first started using Kubernetes, we, like many others, started with Fluentd. You can find many setup guides, even though some run to 30+ steps, and there are plenty of example configs online, so it is easy to get started. Fluentd works well at low volume; the challenge comes at higher volume. Scaling becomes the hard part, mainly the effort of scaling Elasticsearch: learning how to properly architect shards and indices and becoming an expert in Elasticsearch operations.
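
For reference, a typical Fluentd setup deploys a DaemonSet that tails container logs on every node and ships them to Elasticsearch. A minimal sketch, assuming the upstream fluentd-kubernetes-daemonset images; the manifest path and Elasticsearch host below are placeholders you would substitute:

    # Deploy Fluentd as a DaemonSet that tails /var/log/containers on every node
    # and forwards to Elasticsearch (manifest filename is illustrative, not exact).
    kubectl apply -f fluentd-daemonset-elasticsearch.yaml

    # The DaemonSet typically takes the Elasticsearch endpoint as environment variables, e.g.:
    #   FLUENT_ELASTICSEARCH_HOST: elasticsearch.logging.svc
    #   FLUENT_ELASTICSEARCH_PORT: "9200"

    # Confirm one Fluentd pod is scheduled per node:
    kubectl get daemonsets -n kube-system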

Kubernetes logging with LogDNA

Similar to the Fluentd agent, you can install the LogDNA agent to read and stream log lines from /var/log. Since the agent streams the logs to LogDNA, it doesn't take additional resources inside your pods, and you can focus on scaling your product instead of spending time scaling logging infrastructure. The agent runs per node instead of per pod, which is much more efficient.

We have kept our Kubernetes integration simple: all you need to do is install our agent using two kubectl commands. We'll automatically extract and index the Kubernetes metadata (pod name, container name, namespace, node) and auto-parse all the common log types without the need to manually identify them like you would with Fluentd. You can also have your custom log lines parsed and indexed using our custom log parsing interface.

Try our Kubernetes integration today with a 14-day free trial. We look forward to hearing your feedback.
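
For illustration, the two-command install looks roughly like the following; the ingestion key is a placeholder and the manifest URL reflects LogDNA's published instructions at the time and may have changed, so treat this as a sketch rather than the canonical steps:

    # 1. Store your LogDNA ingestion key as a Kubernetes secret
    #    (replace <YOUR-INGESTION-KEY> with the key from your LogDNA account).
    kubectl create secret generic logdna-agent-key \
      --from-literal=logdna-agent-key=<YOUR-INGESTION-KEY>

    # 2. Deploy the LogDNA agent as a DaemonSet, one pod per node
    #    (manifest URL is illustrative of the published one and may differ).
    kubectl create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml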
