What is Logspout?

4 MIN READ

Logspout is an open source log router designed specifically for Docker container logs. If you’ve ever looked into log management for Docker, chances are you’ve heard of it. Logspout is a container that collects logs from all other containers running on the same host, then forwards them to a destination of your choice. This lets you send logs to an HTTP/S server, syslog server, or other endpoint without having to monitor files or modify your host systems.

Mezmo, formerly known as LogDNA, provides a version of Logspout that routes logs directly to Mezmo's ingestion servers, making it even easier to deploy. We’ll explain how the Mezmo Logspout container works, how it differs from the Mezmo agent, and when you might want to use one over the other.

How is Logspout Different From the Mezmo Agent?

Mezmo Logspout works by connecting directly to the host’s Docker socket and reading the logs that containers print to stdout or stderr. It then routes these messages to a destination of your choice through an HTTP endpoint. Because it streams logs rather than persisting them, network hiccups or traffic spikes can result in data loss.

The Mezmo agent works by monitoring files for new events and sending those events to Mezmo. This offers a high level of compatibility not just with Docker logs but across platforms. Since it is file based, it includes recovery and retry mechanisms to minimize data loss.

Benefits of Logspout

Let’s look at some of the benefits that Logspout offers for shipping logs to Mezmo. If you're used to using the agent, this will give you a better idea of the differences between the Mezmo agent and Mezmo Logspout.

Better Performance

Mezmo Logspout is a lightweight Go application that runs entirely in memory. The Mezmo Logspout image is roughly 50 MB and uses just over 9 MB of RAM on a fresh start. Since it doesn’t read from log files, it can ingest logs from any number of sources without being constrained by disk I/O.

Easier Management

Logspout does not need to access any files or directories and only requires access to the Docker socket, which is located at /var/run/docker.sock. Any newly started containers are automatically detected by Logspout without requiring you to restart the container or add a new volume mount. Like the agent, you can change settings through environment variables.
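As a sketch of what this looks like in practice (the image name and the `LOGDNA_KEY` variable follow the project’s README; check it for the current values), a single `docker run` is enough:

```shell
# Run the Mezmo (LogDNA) Logspout container, mounting the Docker socket
# so it can discover containers on this host and stream their logs.
docker run -d --name=logspout \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e LOGDNA_KEY="<your ingestion key>" \
  logdna/logspout:latest
```

From that point on, every container started on the host is picked up automatically.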

Logspout can also be deployed using most orchestration tools including Kubernetes, Rancher, Docker Swarm, AWS Elastic Container Service, and Docker Compose. Deploying it as a DaemonSet ensures that each node in your cluster runs a single instance of the container. You can find examples for orchestrating Logspout in the project’s readme.

Customizability

Logspout allows you to ship the same logs to multiple destinations in addition to Mezmo. You can change the default Mezmo endpoint to a separate endpoint (such as an on-premise installation), or send the same logs to multiple destinations. We’ve also added support for tags and custom hostnames, so you can search and filter your logs more easily.
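As an illustrative sketch, tags, a custom hostname, and an alternate endpoint are all set through environment variables (the names `TAGS`, `HOSTNAME`, and `LOGDNA_URL` follow the project’s README and may differ between versions):

```shell
# Illustrative values only — the tag list, hostname, and endpoint
# shown here are placeholders for your own configuration.
docker run -d --name=logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e LOGDNA_KEY="<your ingestion key>" \
  -e TAGS="production,web" \
  -e HOSTNAME="web-host-01" \
  -e LOGDNA_URL="<your on-prem or alternate endpoint>" \
  logdna/logspout:latest
```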

Since the Mezmo Logspout image is derived from the original Logspout image, you can leverage other features such as multiline logs, custom routes, and the ability to inspect log streams in real time.

Other Considerations

The key difference between Logspout and the agent is that Mezmo Logspout can only read logs from other Docker containers, since it has no filesystem access. In other words, Logspout cannot collect logs from your host or from non-Docker applications unless you route them through a container. In addition, Logspout only works when the Docker logging driver is set to either journald or json-file. Drivers that don’t work with the docker logs command are not supported.
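If you’re unsure which logging driver your daemon uses, it is set in Docker’s `daemon.json` (the default is already json-file); the rotation options below are illustrative values, not requirements:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```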

Like the Mezmo agent, the Logspout container must run as the root user. This is because the container accesses the Docker API directly. This creates a security risk, since any process with direct access to the Docker API can perform any action including creating containers, effectively gaining root access to the host. We explained the security implications of this in detail on our blog post on why the Mezmo agent runs as root.

Lastly, there is an increased risk of data loss due to Logspout only storing logs in memory. If the container stops unexpectedly, any logs that were buffered but not sent will be lost (they could still be on the host if the Docker Logging Driver processed them, but Logspout won’t attempt to re-send them after restarting). You can reduce this risk by:

  • Increasing the memory limit of the container
  • Increasing the buffer size
  • Decreasing the flush interval (so buffered logs are sent to Mezmo more frequently)
  • Increasing the number of retries in case of a network failure.
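These tuning knobs are also exposed as environment variables. As a sketch (the names `FLUSH_INTERVAL`, `MAX_BUFFER_SIZE`, and `MAX_REQUEST_RETRY` follow the project’s README — verify them, and their units, for your version):

```shell
# Illustrative tuning values; adjust for your workload.
docker run -d --name=logspout \
  --memory=256m \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e LOGDNA_KEY="<your ingestion key>" \
  -e FLUSH_INTERVAL=250 \
  -e MAX_BUFFER_SIZE=2 \
  -e MAX_REQUEST_RETRY=10 \
  logdna/logspout:latest
```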

When Should You Use Logspout?

To recap, use Logspout as your log management solution when you:

  • Are using Docker or a Docker orchestration tool
  • Don’t need to log host systems or non-Docker applications
  • Want fast performance and a small resource footprint
  • Want to ship logs to multiple destinations
  • Are comfortable streaming logs, accepting the possibility of losing some logs if the container fails

The best way to determine whether the Logspout container is a good fit is to try it. The Mezmo Logspout container is completely open source and can be deployed in seconds. Let us know if you have any questions, comments, or suggestions!
