Announcing Our Series D, Led by NightDragon

4 MIN READ

LogDNA is now Mezmo, but the product you know and love is here to stay.

LogDNA is experiencing explosive growth in response to our work harnessing observability data across a broad set of enterprise use cases. Today, this tremendous opportunity is validated by our announcement of a $50 million Series D led by NightDragon, the cybersecurity investment and advisory firm.

NightDragon’s interest was first piqued by our cloud platform, which manages the spiky nature of log data in real time and at hyperscale. As cybersecurity experts, they see that our approach addresses the needs of security vendors and practitioners who struggle with observability. In April, Dave DeWalt, NightDragon’s co-founder and managing director, joined the LogDNA board as vice chair. Since then, the extended NightDragon team has been an extraordinary collaborator, bringing expertise from the world’s largest security companies to the table.

They see the world as we do: improving the builder’s workflow is core to solving the observability problem and strengthening cybersecurity defenses. Especially in a world that has moved toward DevSecOps, the solution must make managing observability data simple and delightful.

    The Observability Data Opportunity for Builders

    At LogDNA, everything we do is about enabling the people who build solutions that shape the world. We are for the builders—the application developers, the site reliability engineers, the platform engineers, and the teams that make sure what’s being built is secure. We know that today’s builders are looking to harness machine data to quickly solve technical and business problems ranging from fixing code to finding security flaws. 

Interest in observability is at an all-time high. While there’s been a lot of experimentation and investment in the space, companies still struggle with ease of use, interoperability, execution, and cost. Enterprises are not seeing the full value of their observability investments, and Gartner has placed observability at the “Peak of Inflated Expectations” in a recent Hype Cycle report.

A few months ago, I wrote about how legacy observability solutions are undermining the builder’s ability to innovate. The amount of data, the number of users, and the tools they use to access data are exploding. It’s the perfect storm, and the prevailing approach in the observability market today is to try to contain the storm in a single pane of glass. It sounds logical, but it’s making data-intensive innovation and operations more complicated, slower, and more prone to risk.

Consolidating data into one (or a few) tools was a good first step in the early days, but now that open systems, cloud-native architectures, interconnected applications, and data are commonplace, a single pane of glass is far too limiting. It’s really a monolithic application for all data, which runs counter to the benefits of an open, cloud-native approach and to how builders build. It’s a choke point.

It’s time to shift the focus from single-pane-of-glass platforms to solutions that enable the data consumers. These people must be at the center of data management. They must be able to capture the real-time value of data in motion, not just data at rest once it’s hit storage. They must be able to ingest and process data at a central point—the pipeline—and then route it to the tools where people are actually working, rather than forcing them to break their workflow to use a different tool. Besides, the tools are always changing.

    Put simply, they need to get data from ANY source at ANY scale to ANY destination for ANY use case so that they can empower ANY data consumer.  
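
To make that pattern concrete, here is a minimal sketch in Python. It is illustrative only, not Mezmo’s actual API; the event shape, processor, and route names are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List

# A log event is just a dictionary here, e.g.
# {"source": "checkout-api", "level": "error", "message": "timeout"}.
LogEvent = dict


@dataclass
class Route:
    """Send every event matching `matches` to a named destination."""
    matches: Callable[[LogEvent], bool]
    destination: str


def run_pipeline(
    events: Iterable[LogEvent],
    processors: List[Callable[[LogEvent], LogEvent]],
    routes: List[Route],
) -> Dict[str, List[LogEvent]]:
    """Ingest events once, process them centrally, then fan each one
    out to every destination whose route predicate matches."""
    outputs: Dict[str, List[LogEvent]] = {r.destination: [] for r in routes}
    for event in events:
        for process in processors:
            event = process(event)  # transform/enrich data in motion
        for route in routes:
            if route.matches(event):
                outputs[route.destination].append(event)
    return outputs


# Example: errors go to the SIEM, everything lands in cheap storage.
processors = [lambda e: {**e, "ingested": True}]  # trivial enrichment step
routes = [
    Route(lambda e: e.get("level") == "error", "siem"),
    Route(lambda e: True, "object_storage"),
]
```

The point of the pattern is that destinations are swappable: adding a new tool means adding a route, not re-plumbing every source.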

    A Broad Set of Use Cases

    Since our early days, we’ve focused our development strategy on creating a massively scalable platform for a broad set of use cases. One of our early examples of this is IBM Cloud, where we now process more than four petabytes of log data per month across a dozen global data centers. We’ve built and scaled a platform that can collect and ingest massive amounts of log data, process it, and route it to any destination for any use case. This year, we made it possible for IBM to stream data to complementary solutions like IBM QRadar and Splunk, enabling a broader set of use cases for their customers. 

What we’ve done for developers—dramatically improving productivity—is what we are creating for security professionals, who race against time, drown in data, and need to find the right answers quickly. We are reimagining the builder’s workflow so that each and every data consumer can take action on data in real time. We believe that this moment represents a new wave in software delivery and security, one where data consumers are truly enabled with the information they need in the tools where they work.

    Real-time Security Events and Response

As an example of how our platform enables a broad set of use cases, LogDNA helps teams trim excessive noise when using SIEMs to respond to security events. With best-in-class log exclusion rules, these teams can get the valuable insights they need without sifting through mountains of data. As a result, LogDNA gives organizations the ability to use their SIEM as it was intended without racking up unnecessary costs. By passing data through our observability pipeline, they can effectively separate analysis and storage, rather than relying on SIEMs to process and route data. Additionally, a host of control features allow them to protect their budget by setting limits on data flow. Gone are the days when teams had to choose between data insights and staying within budget. Now teams can use LogDNA to process and route their data and their SIEM to take action on it.
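
As a rough illustration of how exclusion rules work, the sketch below drops known-noisy events before they ever reach the SIEM while keeping everything in cheap storage. The rule format and field names are hypothetical, not Mezmo’s actual syntax.

```python
import re
from typing import Callable, Iterable

# Hypothetical exclusion rules: each names an app and a message
# pattern whose matches should never be forwarded to the SIEM.
EXCLUSION_RULES = [
    {"app": "nginx", "pattern": re.compile(r"GET /healthz")},        # health checks
    {"app": "kube-probe", "pattern": re.compile(r"liveness probe")}, # k8s probes
]


def is_excluded(event: dict) -> bool:
    """Return True if any exclusion rule matches this event."""
    return any(
        rule["app"] == event.get("app")
        and rule["pattern"].search(event.get("message", ""))
        for rule in EXCLUSION_RULES
    )


def forward(
    events: Iterable[dict],
    to_siem: Callable[[dict], None],
    to_storage: Callable[[dict], None],
) -> None:
    """Route every event to cheap storage, but send only events that
    survive exclusion on to the SIEM."""
    for event in events:
        to_storage(event)       # keep everything for audit/compliance
        if not is_excluded(event):
            to_siem(event)      # only actionable events hit the SIEM
```

Separating analysis from storage this way keeps SIEM costs proportional to actionable signal rather than raw data volume.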

    Enabling Service Providers

We see service providers—especially managed security service providers (MSSPs) and managed detection and response (MDR) providers—as the first wave of categories that will take advantage of our observability pipeline to differentiate their security capabilities for enterprises. Enterprises have dozens of security tools but few choices in how to leverage observability data across them. Meanwhile, security professionals are drowning in alerts and red lights, and have to sift through data to do everything from stopping threats to fixing poor configurations. There is a desire to get ahead of security issues by shifting left with DevSecOps, building sound security practices proactively into business and technology operations rather than constantly chasing bad security postures.

    We’re already bringing on design partners in the MSSP category, one of which is tasking us with simplifying their data collection, processing, and routing. In the new year, we’ll build primitives to enable their logic on top of our pipeline so that they can take action on their data in motion. 

    Fueling the Observability Data Opportunity

    There is clear enterprise-driven demand to make observability data work better for the vast array of data consumers. Now it’s time for us to scale to meet demand. This investment allows us to accelerate bringing our full solution to market, focusing on builders and addressing the needs of service providers and enterprises that strive for innovation. 

    Today, we take our next leap forward. To meet our audacious goals, we are expanding our team, building new technical integrations, establishing new strategic partnerships, and supporting a variety of clouds and platforms. This is a pivotal time in enabling builders, and LogDNA is at the leading edge of this moment.

    I’m excited to share our next steps and progress in realizing our vision with the community. 

    Tucker Callaway

    12.6.21

    Tucker Callaway is the Chief Executive Officer of Mezmo. Tucker has more than 20 years of experience selling enterprise software, with an emphasis on developer and DevOps tools. Prior to Mezmo, Tucker served as CRO of Sauce Labs and Vice President of Worldwide Sales at Chef. He holds a BA in Computer Science from the University of California, Berkeley.
