Announcing Our Series D, Led by NightDragon

4 MIN READ

LogDNA is now Mezmo, but the product you know and love is here to stay.

LogDNA is experiencing explosive growth as we help enterprises harness observability data across a broad set of use cases. Today, that tremendous opportunity is validated by our announcement of a $50 million Series D led by NightDragon, the cybersecurity investment and advisory firm.

NightDragon’s interest was first piqued by our cloud platform, which can manage the spiky nature of log data in real time and at hyperscale. As cybersecurity experts, they recognize that our approach addresses the observability struggles of security vendors and practitioners alike. In April, Dave DeWalt, NightDragon’s co-founder and managing director, joined the LogDNA board as vice chair. Since then, the extended NightDragon team has been an extraordinary collaborator, bringing expertise from the world’s largest security companies to the table.

They see the world as we do: Improving the builder’s workflow is core to solving the observability problem and strengthening cybersecurity defenses. Especially in a world that has moved towards DevSecOps, the solution needs to make the workflow of managing observability data simple and delightful. 

The Observability Data Opportunity for Builders

At LogDNA, everything we do is about enabling the people who build solutions that shape the world. We are for the builders—the application developers, the site reliability engineers, the platform engineers, and the teams that make sure what’s being built is secure. We know that today’s builders are looking to harness machine data to quickly solve technical and business problems ranging from fixing code to finding security flaws. 

Interest in observability is at an all-time high. While there’s been plenty of experimentation and investment in the space, companies still struggle with ease of use, interoperability, execution, and cost. Enterprises are not seeing the full value of their observability investment, and Gartner has declared that observability is at the “Peak of Inflated Expectations” in a recent Hype Cycle report.

A few months ago, I wrote about how legacy observability solutions are undermining the builder’s ability to innovate. The amount of data, the number of users, and the tools they use to access data are exploding. It’s the perfect storm, and the prevailing approach in the observability market today is to try to contain the storm in a single pane of glass. That sounds logical, but it’s making data-intensive innovation and operations more complicated, slower, and more prone to risk.

Consolidating data into one (or a few) tools was a good first step in the early days, but now that open systems, cloud-native architectures, interconnected applications, and data are commonplace, a single pane of glass is far too limiting. It’s really a monolithic application for all data, which runs counter to the benefits of an open, cloud-native approach and to how builders build. It’s a choke point.

It’s time to shift the focus from single-pane-of-glass platforms to solutions that enable the data consumers. These people must be at the center of data management. They must be able to capture the real-time value of data in motion, not just data at rest once it’s hit storage. They must be able to ingest and process data at a central point—the pipeline—and then route it to the tools where people are actually working, rather than breaking their workflow to use a different tool. Besides, the tools are always changing.

Put simply, they need to get data from ANY source at ANY scale to ANY destination for ANY use case so that they can empower ANY data consumer.  
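
To make the pattern concrete, here is a minimal sketch in Python. It is an illustration only, with hypothetical names rather than our product’s API: events are ingested, processed at a central point, and then fanned out to every destination whose rule matches.

```python
from dataclasses import dataclass, field
from typing import Iterable

Event = dict  # a parsed log line, metric sample, audit record, etc.

@dataclass
class Pipeline:
    # processors run in order; returning None drops the event
    processors: list = field(default_factory=list)
    # each route pairs a predicate with a destination callback
    routes: list = field(default_factory=list)

    def ingest(self, events: Iterable[Event]) -> None:
        for event in events:
            for process in self.processors:
                event = process(event)
                if event is None:
                    break
            else:
                # one event can fan out to many destinations
                for matches, deliver in self.routes:
                    if matches(event):
                        deliver(event)

# The same stream reaches different consumers in their own tools.
pipeline = Pipeline(
    processors=[lambda e: {**e, "app": e.get("app", "unknown")}],
    routes=[
        (lambda e: e.get("level") == "error", lambda e: print("on-call tool:", e)),
        (lambda e: "auth" in e["app"], lambda e: print("SIEM:", e)),
        (lambda e: True, lambda e: print("archive:", e)),
    ],
)
pipeline.ingest([{"app": "auth-service", "level": "error", "msg": "login failed"}])
```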

A Broad Set of Use Cases

Since our early days, we’ve focused our development strategy on creating a massively scalable platform for a broad set of use cases. An early example is IBM Cloud, where we now process more than four petabytes of log data per month across a dozen global data centers. We’ve built and scaled a platform that can collect and ingest massive amounts of log data, process it, and route it to any destination for any use case. This year, we made it possible for IBM to stream data to complementary solutions like IBM QRadar and Splunk, enabling a broader set of use cases for their customers.

What we’ve done for developers—dramatically improving productivity—is what we are creating for security professionals who are racing against time, drowning in data, and need to find the right answers quickly. We are reimagining the builder’s workflow so that each and every data consumer can take action on data in real time. We believe that this moment represents a new wave in software delivery and security, one where data consumers are truly enabled with the information that they need in the tools where they work.  

Real-time Security Events and Response

As an example of how our platform enables a broad set of use cases, LogDNA helps teams filter out excessive noise when using SIEMs to respond to security events. With best-in-class log exclusion rules, these teams can get the valuable insights they need without sifting through mountains of data. As a result, LogDNA gives organizations the ability to use their SIEM as it was intended without racking up unnecessary costs. By passing data through our observability pipeline, they can effectively separate analysis from storage rather than relying on SIEMs to process and route data. Additionally, a host of control features lets them protect their budget by setting limits on data flow. Gone are the days when teams had to choose between data insights and staying within budget. Now teams can use LogDNA to process and route their data, and their SIEM to take action on it.
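
As an illustration of that separation, here is a hedged sketch in Python with hypothetical helper names (archive, send_to_siem): an exclusion rule drops known noise before the SIEM ever sees it, while the complete stream still lands in low-cost storage for audits and later replay.

```python
# Patterns that add no security signal; real rules would be configurable.
NOISY_PATTERNS = ("health check", "heartbeat", "connection reset by peer")

def is_noise(event: dict) -> bool:
    """Exclusion rule: match routine events that carry no security signal."""
    message = event.get("msg", "").lower()
    return any(pattern in message for pattern in NOISY_PATTERNS)

def archive(event: dict) -> None:
    print("archive:", event)  # stand-in for low-cost object storage

def send_to_siem(event: dict) -> None:
    print("SIEM:", event)     # stand-in for QRadar, Splunk, etc.

def route(event: dict) -> None:
    archive(event)            # everything is retained cheaply
    if not is_noise(event):
        send_to_siem(event)   # the SIEM only analyzes what matters

route({"msg": "GET /healthz health check", "status": 200})
route({"msg": "5 failed logins for user admin", "status": 401})
```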

Enabling Service Providers

We see service providers—especially managed security service providers (MSSPs) and managed detection and response (MDR) providers—as the first wave of categories that will adopt our observability pipeline to differentiate their security capabilities for enterprises. Enterprises have dozens of security tools but few choices in how to leverage observability data across them. Meanwhile, security professionals are drowning in alerts and red lights, and they have to sift through data to do everything from stopping threats to fixing poor configurations. There is a desire to get ahead of security issues by shifting left with DevSecOps, building sound security practices proactively into business and technology operations instead of constantly chasing bad security postures.

We’re already bringing on design partners in the MSSP category, one of which is tasking us with simplifying their data collection, processing, and routing. In the new year, we’ll build primitives to enable their logic on top of our pipeline so that they can take action on their data in motion. 
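
To give a flavor of what such a primitive might look like (a hypothetical sketch, not a committed design), a provider could register its own detection logic and have the pipeline invoke it on each event as data streams through:

```python
from typing import Callable

# (condition, action) pairs supplied by the provider, evaluated in-stream
ACTIONS: list = []

def on_event(condition: Callable[[dict], bool]):
    """Primitive for registering provider logic to run on data in motion."""
    def register(action: Callable[[dict], None]):
        ACTIONS.append((condition, action))
        return action
    return register

@on_event(lambda e: e.get("failed_logins", 0) > 3)
def open_incident(event: dict) -> None:
    print("MSSP action: open incident for", event["host"])

def process(event: dict) -> None:
    for condition, action in ACTIONS:
        if condition(event):
            action(event)

process({"host": "web-01", "failed_logins": 5})
```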

Fueling the Observability Data Opportunity

There is clear enterprise-driven demand to make observability data work better for the vast array of data consumers. Now it’s time for us to scale to meet demand. This investment allows us to accelerate bringing our full solution to market, focusing on builders and addressing the needs of service providers and enterprises that strive for innovation. 

Today, we take our next leap forward. To meet our audacious goals, we are expanding our team, building new technical integrations, establishing new strategic partnerships, and supporting a variety of clouds and platforms. This is a pivotal time in enabling builders, and LogDNA is at the leading edge of this moment.

I’m excited to share our next steps and progress in realizing our vision with the community. 
