The Observability Data Opportunity

4 MIN READ

LogDNA is now Mezmo, but the product you know and love is here to stay.

Observability data, and especially log data, is immensely valuable to modern businesses. Making the right decision, whether monitoring the bits and bytes of application code or acting in the security incident response center, requires the right people to generate insights from data as fast as possible.

Alongside the rise of machine data is a widespread cultural and operational shift across enterprises toward DevOps (and DevSecOps). This working style brings different teams together and shifts the development approach left, giving developers greater accountability for the operations and security of their applications. With a stronger DevSecOps orientation, teams accelerate with autonomy, align around innovation, and expect greater access to data.

The rapid growth of Mezmo, formerly known as LogDNA, illustrates just how valuable this data is and how much friction can be removed from the way autonomous teams use it for application troubleshooting and debugging. With Mezmo's logging platform firmly embedded in enterprise DevOps teams, our customers are turning to us to solve a deeper level of business and technology issues, including the need to:

  • Centralize data from any source in a cost-effective manner
  • Leverage data to solve more problems across the company
  • Get data to any destination or data consumer in the company in real time

Understanding the Observability Data Mess 

Log data is the largest, and arguably the most important, category of observability data. It underpins all applications and systems. Yet despite the perceived value of all of this data and the hype around observability, the vast majority of observability data remains dark: wasted, but still expensive to collect and store.

What’s keeping this data in the dark? Our customers tell us that scale, complexity, a wide variety of data consumers, and runaway costs make it nearly impossible to get value out of all of their machine data. A number of technical and organizational challenges are also holding them back, including:

  • Deep-rooted data and organizational silos that require distinct data workflows for the users of each tool. In many companies, functions like application performance, security, and real-time decision making are owned by specific teams or titles. Even the tools those teams use often exist in silos (for example, only IT operations gets log management tooling), which makes it nearly impossible to access the information you need to understand another team's functions. This creates a massive barrier to enterprise-wide sharing.
  • The shift to cloud-native and hybrid cloud environments delivers massive volumes of data, along with tremendous diversity, erratic spikes, and complexity. These factors combine to pound enterprises with cost, compliance, and other data management and operational headaches. In addition, legacy big data approaches aren't suited to this mix of structured and unstructured data, which needs to be available in real time.
  • Heavy-handed single-pane-of-glass approaches are not sufficient to transform and route observability data to the appropriate storage destinations in a timely and cost-effective way. They cannot be everything to everybody, and the result is a watered-down experience that leaves specialized teams frustrated. Machine data streaming requires specialization beyond any one business or operational function. This dynamic reinforces technology and organizational silos and bottles up valuable data to rot in storage.

How Mezmo Uniquely Unlocks the Value of Machine Data

The amount of log data keeps growing, but its value historically hasn't kept pace. We call this the machine data cost-curve problem, and it's the problem we intend to tackle head-on and solve.

Unlocking the value of log data across the multitude of enterprise needs, from developer productivity to cybersecurity, requires the ability to ingest data in a way that is agnostic to source; the ability to process and route that data, regardless of the destination; and the ability to store and analyze that data in a way that meets the requirements of each of its consumers.
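
To make the first of those abilities concrete, here is a minimal Python sketch of source-agnostic ingestion: it accepts a raw line from any source, detects whether it is structured (JSON) or plain text, and normalizes it into one common event shape that downstream processing, routing, and storage can all consume. The event fields and helper names are hypothetical illustrations, not Mezmo's actual schema or API.

    import json
    import re
    from datetime import datetime, timezone

    # Plain-text lines like "2022-01-01T00:00:01Z WARN cache miss" (hypothetical format).
    LINE_PATTERN = re.compile(r"^(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<message>.*)$")

    def normalize(raw: str, source: str) -> dict:
        """Turn one raw log line from any source into a common event shape."""
        try:
            body = json.loads(raw)
        except json.JSONDecodeError:
            body = None
        if isinstance(body, dict):  # structured source, e.g. JSON application logs
            return {
                "source": source,
                "timestamp": body.get("time", datetime.now(timezone.utc).isoformat()),
                "level": str(body.get("level", "INFO")).upper(),
                "message": body.get("msg", ""),
                "attributes": {k: v for k, v in body.items() if k not in ("time", "level", "msg")},
            }
        match = LINE_PATTERN.match(raw)  # unstructured plain-text source
        if match:
            return {"source": source, "timestamp": match["ts"], "level": match["level"],
                    "message": match["message"], "attributes": {}}
        # Unknown shape: wrap the line untouched so nothing is dropped on the floor.
        return {"source": source, "timestamp": datetime.now(timezone.utc).isoformat(),
                "level": "UNKNOWN", "message": raw, "attributes": {}}

    print(normalize('{"time": "2022-01-01T00:00:00Z", "level": "error", "msg": "db timeout"}', "api"))
    print(normalize("2022-01-01T00:00:01Z WARN cache miss rate above threshold", "cache"))

Once every event shares one shape, the same processing and routing logic can serve any destination, which is what makes source-agnostic ingestion pay off downstream.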

Mezmo has always believed that optimizing the ingestion pipeline is essential to solving the industry's need for true observability. Our innovations in this area form a powerful foundation to meet today's critical scale, storage, and routing requirements.

  • Mezmo is cloud native and built for scale. This has made it possible for Mezmo to ingest, process, route, and analyze petabytes of log data for customers, and to store that data at an affordable rate. Granular controls also help rein in costs (from those inevitable erratic spikes) and meet compliance demands; one such control is sketched after this list.
  • We’ve built a platform that solves the challenge of scaling to meet the spiky demands of machine data, a need shared by enterprises that are migrating to the cloud and undergoing a digital transformation. 
  • By ingesting, processing, streaming, and storing machine data at hyperscale, Mezmo gives enterprises the freedom to deploy that data to the application and human specialists who best know how to unlock its value. Data isn’t bottled up in a single pane of glass, but unleashed for use across all possible panes of glass. 
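
As a rough illustration of the granular controls mentioned above, here is a minimal Python sketch of one way to absorb an erratic spike, assuming a simple per-source, per-window budget with sampling for the overflow. The policy and its names (SpikeGuard, budget, sample_rate) are hypothetical, not Mezmo's actual mechanism.

    import random

    class SpikeGuard:
        """Admit events at full fidelity up to a budget, then sample the excess."""

        def __init__(self, budget: int, sample_rate: float):
            self.budget = budget            # events admitted per window at full fidelity
            self.sample_rate = sample_rate  # fraction of over-budget events to keep
            self.seen = 0

        def admit(self, event: dict) -> bool:
            self.seen += 1
            if self.seen <= self.budget:
                return True
            return random.random() < self.sample_rate

        def reset_window(self) -> None:
            self.seen = 0  # call on a timer, e.g. once per minute

    guard = SpikeGuard(budget=1000, sample_rate=0.05)
    kept = sum(guard.admit({"message": f"line {i}"}) for i in range(10_000))
    print(f"kept roughly {kept} of 10,000 events during a spike")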

Already, these platform strengths allow some of our customers to leverage data from Mezmo in new machine data pipeline use cases. For example:

  • A major U.S. airline deployed Mezmo to give developers and SREs across 27 different agile product teams access to their log data. These teams use Mezmo to gather data from many microservice and monolithic applications across multiple clouds and on-premises environments and stream it into other observability tools for a diverse set of use cases.
  • A major cloud provider uses Mezmo as their embedded provider for both logging and monitoring. Our SaaS and private cloud solutions are deployed across 11 global data centers and empower both internal teams and enterprises using cloud services to fully understand their application and system performance. And now, 100 accounts on their cloud platform are streaming logs from Mezmo to their SIEM.
  • One of the world’s largest e-commerce companies uses Mezmo to capture IoT data from their warehouse robots. Centralizing their logs into a single platform allows them to troubleshoot issues with their robots as they happen in the field. Seventy-five percent of employees in the company’s robotics division access their logs, which is made possible by having a tool that’s optimized for a DevOps culture.

Join the New Mezmo Streaming Early-access Program

You can join these companies to overcome machine data management headaches and unlock more value from your log data across the enterprise. 

Today, we announced the early-access beta of a powerful new platform feature: Mezmo Streaming.

Mezmo Streaming lets enterprises ingest all of their log data into a single platform and then route it for any enterprise use case. This new feature takes full advantage of Mezmo's unparalleled ability to quickly ingest massive amounts of structured and unstructured data, normalize it, and apply granular storage controls to manage costs and meet compliance needs. It's ideally suited for use cases in cybersecurity and enterprise-level application delivery, where more data can deliver dramatic outcomes if it's accessible in real time.
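
To show the routing idea in miniature, here is a Python sketch of how one normalized event can fan out to several destinations, say a SIEM for security events in real time, an alerting tool for errors, and low-cost archive storage for everything. The rule set and destination names are hypothetical illustrations, not Mezmo Streaming's actual configuration.

    from typing import Callable

    Event = dict
    Rule = tuple[Callable[[Event], bool], str]  # (predicate, destination)

    ROUTES: list[Rule] = [
        (lambda e: e.get("level") in ("ERROR", "CRITICAL"), "alerting"),
        (lambda e: "auth" in e.get("source", ""), "siem"),
        (lambda e: True, "archive"),  # catch-all: keep everything cheaply
    ]

    def route(event: Event) -> list[str]:
        """Return every destination whose predicate matches; events can fan out."""
        return [dest for predicate, dest in ROUTES if predicate(event)]

    print(route({"source": "auth-service", "level": "ERROR", "message": "login failed"}))
    # -> ['alerting', 'siem', 'archive']

Because routing is just predicates over a normalized event, adding a new consumer means adding a rule, not re-instrumenting every application.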

For too long, enterprises have had to make difficult choices around how to use all of their machine data while controlling skyrocketing costs. We will continue to build a comprehensive platform that enables anyone to ingest, process, route, analyze, and store all of their log data in a way that makes sense for them. 

I’m excited to keep sharing the progress we are making in this space, and I look forward to adding even more value that enables you to build capabilities on top of your data in motion.
