Tucker Callaway on the State of the Observability Market

4 MIN READ

LogDNA is now Mezmo but the insights you know and love are here to stay.

Listen here and read along below.

Liesse: Tucker Callaway is the CEO of LogDNA. He has more than 20 years of experience in enterprise software with an emphasis on developer and DevOps tools. Tucker drives innovation, experimentation, and a culture of collaboration at LogDNA, three ingredients that are essential for the type of growth that we've experienced over the last few years.

LogDNA is a comprehensive platform for controlling all of your log data. It enables teams to ingest and route massive amounts of data from any source to any destination. This capability fuels enterprise-level application development and delivery, security, and compliance use cases where the ability to use log data in real time is mission critical.

I'm Liesse from LogDNA. Today, Tucker and I are going to talk about how LogDNA sees the market, including the need to extract more value from observability data. 

Hi Tucker. It's great to be having this conversation with you. To set the foundation for this discussion, can you tell us what market categories we're going to talk about today?

Tucker: Hi Liesse, thanks for having me on—always fun. Okay, so the market categories we're going to talk about today are log management, which of course is a subset of observability, and cybersecurity. 

Liesse: Awesome. Most people associate LogDNA with log management, so let's start there. How do you see the log management category changing over the next few years? 

Tucker: It's a fascinating question, obviously one that I spend a lot of time on—I guess we both do. I believe that log management, as we know it today, is going to be very much disrupted. In fact, in many ways it already has been. Certainly I understand the draw to consolidated observability platforms that include log management. But what I find really interesting is that, as log management as a traditional discipline is consumed more and more by observability, the log data itself is going in a different direction. Right? Whereas we see a simplification and consolidation of log management as a discipline, log data is increasingly being consumed by more use cases and therefore by more consumers. That means more requirements, capabilities, and disciplines have to be applied to the log data while it's in motion, and it needs to get to more destinations than ever before. 

Liesse: Yeah, absolutely. Can you talk a little bit about pipeline capabilities and what you think that means in relation to log data? 

Tucker: Yeah. I mean, that's the foundation of it, really. The pipeline is what I believe log data is about today. When I say pipeline, I mean the ability to route, store, process, enhance, redact, exclude—all the different operations you want to take on this data in motion. That's going to continue to grow because of these use cases and because of the variety of consumers that will need to engage with log data. 
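To make the idea concrete, here's a minimal sketch of those in-motion operations (exclude, redact, enhance, route). The function names, event fields, and destinations are illustrative assumptions for this post, not LogDNA's actual API:

```python
import re

# Hypothetical sketch of in-motion log operations. Field names ("level",
# "message") and destinations ("siem", "archive") are assumptions.

def exclude(event):
    # Drop debug-level noise before it incurs downstream cost.
    return None if event.get("level") == "debug" else event

def redact(event):
    # Mask anything resembling an email address in the message.
    event["message"] = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", event["message"])
    return event

def enhance(event):
    # Attach metadata a downstream consumer (e.g. a SIEM) might need.
    event["env"] = "production"
    return event

def route(event):
    # Send security-relevant events one way, everything else another.
    return "siem" if event["level"] == "error" else "archive"

def run_pipeline(events):
    routed = {"siem": [], "archive": []}
    for event in events:
        event = exclude(event)
        if event is None:
            continue
        event = enhance(redact(event))
        routed[route(event)].append(event)
    return routed
```

The key property is that every operation happens while the data is in motion, before it ever reaches a storage or analysis destination.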

Liesse: How do you see this influencing the cybersecurity and observability categories? 

Tucker: These trends are continuing to grow the relevance of both observability and cybersecurity. Some might argue that the two categories will start to converge. One of the things that underpins all of this is the amount of data that is common between them.

It's really interesting. We see massive growth in data with the growth in servers, microservices, cloud migrations; all the things that are happening are just creating more and more data. That, of course, creates more frustration for customers trying to manage that data effectively. You combine that with the threat landscape that exists today, and the cybersecurity challenges are just on the rise. And so, trying to find the signal in that noise, and trying to take action on this amount of data with the modern systems and applications produced today, is a fundamental challenge for our customers. That's having a huge influence on the relevance of log data to cybersecurity and observability use cases. 

Liesse: Absolutely. And we've seen that some of our most successful customers operate with a DevOps mindset. What do you think the impact of that is on the market trends that you mentioned? 

Tucker: Yeah, so it's not just DevOps, of course, it’s DevSecOps now.

So we have three audiences in the mix. As you bring these different teams together, trying to empower them and get them to shift left on operational disciplines, it creates a really interesting change in the dynamic. Fundamentally, it was DevOps before, so you had two teams trying to collaborate. Moving to DevSecOps means three teams trying to collaborate. It's kind of like my personal world: having two kids is a lot more than having one kid. 

Liesse: And having three. 

Tucker: I dunno about three, but if you're the SRE team or the Ops team trying to work with one other team, that's hard enough. Now you've got a third team in the mix and it feels like an explosion. I think that changes the dynamic, because it very much changes the number of consumers of the data. That accelerates the need to work with autonomy while making sure people stay aligned on the innovation that's happening. And out of all that, they expect greater access to data. They need access to the source data, they need data in the tools that they use, and those tools allow them to take action. So it's not good enough, in this new dynamic, to just have a single pane of glass. You have to get people to a point where they can operate in the world they live in, not switching context into a single pane of glass. 

Liesse: Yeah, absolutely. And I've heard you talk about a flywheel effect that this has created where DevSecOps is at the center and then branching out from there are innovation, data, and action. Can you talk us through each of those components of the flywheel? 

Tucker: Yeah. It's interesting in that it's somewhat basic and something we all take for granted, but it's creating a dynamic that we call "the flywheel." The innovation in not only the ways we deliver technology, but the methodologies we use to deliver it, is changing rapidly, and it's creating a much greater surface area. It's also creating a much larger attack vector for cybersecurity use cases. What I mean by that is, you have microservices that create a proliferation in the location and number of services, and then you put DevOps, or agile DevOps-type methodologies, on top of that, where people are delivering continuously. All of that together is creating a huge amount of data. And when you have that much data, it becomes harder to take action. Once you do take action on it, you have the ability to learn about threats, about performance, and about your customer, which then drives back into the innovation curve. And so you're in a virtuous cycle. 

What's the big challenge being addressed today? The DevSecOps teams sit in the middle of that cycle of innovation and data, asking: how do I solve the cost curve problem that sits within the data I need to take action on, and keep the flywheel going?

Liesse: Because you just mentioned the cost curve problem, I'll let you explain it to our listeners. What is the cost curve problem?

Tucker: Yeah, sorry, that's something we refer to internally all the time. It's basically the idea that as the amount of log data grows, the value that people derive from it is not growing linearly with that growth in data. And I'm using data in this case almost as a proxy for cost. So the cost of this data is growing at least linearly, probably exponentially, but the value people derive from it isn't. And that's the problem that we're here to solve, especially as it relates to data in motion. 
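A toy sketch of the shape of the cost curve problem. The prices and the value function below are made-up assumptions for illustration, not LogDNA figures; the point is only the relationship between the two curves:

```python
import math

# Illustrative assumptions only: price_per_gb and the value curve are
# invented for this sketch. Cost tracks volume at least linearly, while
# value grows with diminishing returns, so the gap widens with volume.

def cost(gb, price_per_gb=0.50):
    # Cost grows linearly with volume ingested and stored.
    return gb * price_per_gb

def value(gb):
    # Diminishing returns: each additional GB yields less new insight.
    return 100 * math.log10(1 + gb)

# Under these assumptions, growing volume 100x grows cost 100x
# but grows value only about 2x.
```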

Liesse: Absolutely. Thank you for that explanation. So let's dive a little bit deeper into the cybersecurity use case, because obviously that's really interesting and very relevant for a lot of people who are listening today. What signals are you seeing that observability data is having a bigger and bigger impact on cybersecurity?

Tucker: Yeah, so at the top level within the cybersecurity category, we see this huge rush to adopt XDR solutions. XDR solutions combine the traditional SIEM constructs with endpoint detection and response constructs to bring together a more composite view of the threat environment across both endpoints and servers.

What we see when that happens is that cybersecurity providers are now forced to deal with the scale and volume of log files, and with the spikiness and unstructured nature of the data: all the beautiful things that come with logs that are hard to deal with. That creates a huge need for cybersecurity providers to handle log files effectively, so it's been a big driver. Of course, if you look out at the market, you'll see a number of the larger players in the security space making investments in observability, and a lot of the larger observability players making investments in security. So we see this convergence happening. If you look at the TAM for cybersecurity, analytics, and observability, we just see continued growth, so we see the category continuing to accelerate over time. 

Liesse: Absolutely. So now that we've laid the foundation for where the market is, I would love to hear from your perspective what the market needs right now. 

Tucker: So we break down the needs in this space into three fundamental categories. The first is collect and ingest. The second is process and route. The third is store and analyze. When we talk about collect and ingest, we think about the agent that ingests the data; this is the thing that happens on the client side. I believe that what's required in collect and ingest is a vendor-agnostic capability, potentially open source, that allows for a ubiquitous way to collect this kind of machine data. On the store and analyze side, we see tremendous opportunity. Our traditional solution sits in the store and analyze part of log management, but increasingly over time I believe the cloud data warehouse providers will continue to invest in, and provide, offerings there. And when I think about that, I think about data at rest. 

Where I think the real opportunity is, and what's missing in the market today, is this concept of process and route. Enterprises require the ability to process and route this data regardless of the source and regardless of the destination. But they also need to take action on that data as it goes by, while it's in motion. And so that pipeline we referred to is where we're focused, and what we're excited to bring to market right now. 
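One way to picture a source- and destination-agnostic process-and-route layer is as an ordered routing table: any event, from any source, is matched against rules and sent to the first destination that applies. The rules and destination names below are illustrative assumptions, not an actual product configuration:

```python
# Hypothetical routing table for a process-and-route layer. The event
# fields, predicates, and destination names are assumptions for this
# sketch; the structure itself is agnostic to source and destination.

ROUTES = [
    # (predicate on the event, destination)
    (lambda e: e.get("app") == "auth" and e.get("level") == "error", "siem"),
    (lambda e: e.get("level") in ("error", "warn"), "observability"),
]
DEFAULT_DESTINATION = "cold-storage"  # cheap data-at-rest tier

def route_event(event):
    """Return the first destination whose predicate matches the event."""
    for predicate, destination in ROUTES:
        if predicate(event):
            return destination
    return DEFAULT_DESTINATION
```

Because the rules inspect only the event itself, new sources and new destinations can be added without changing anything upstream or downstream of the routing layer.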

Liesse: Yeah. Many of the capabilities that you just mentioned are foundational to the LogDNA product—we've been working on them since day one. And right now I know we're in the process of building out a lot of new functionality that will enhance our ability to address these needs, specifically in the process and route category, which is so important. So I would love for you to tell us a little bit about the early access program that LogDNA just announced.

Tucker: Yeah. So as you mentioned, we've always been focused on the data pipeline space. That's actually our differentiator: what makes us unique is our pipeline capabilities and our ability to so quickly ingest, process, route, analyze, and enhance all the data that comes through. Traditionally, that data has gone into our own systems. What we're excited about with the streaming capability we've released—we've released it on IBM, where we have a hundred customers using it today, and we're announcing early access for our own environment—is that it allows us to ingest massive amounts of unstructured data and take it from whatever location it's sourced from to whatever location it needs to get to. It doesn't necessarily have to go into the traditional LogDNA anymore. What's most important is that we get it to the right destination for the right consumers, as we talked about. 

Liesse: Yeah absolutely. And this is interesting because it's ideally suited for use cases in cybersecurity and enterprise level application delivery, where more data can deliver dramatic outcomes if it's accessible in real time. And that, I think, is something that we're hearing from the market that they really need and don't have a good solution for today.

Tucker: It's the real-time nature of it that's the challenge, right? It's not that providers out there can't ingest data; it's whether they can ingest it in the spiky way log data arrives. As we've learned over the last four to five years, log volume can increase a hundredfold at a moment's notice. You have a sale or some external event hits, and all of a sudden the traffic hitting an application massively increases, and yet we still need to be able to process it in real time. So being cloud native and having the ability to scale, to work at cloud scale and at hyperscale, is very important for us. And that's what makes us uniquely suited to handle this type of data and enable those cybersecurity use cases. 

Liesse: Yeah. Awesome. I know people are super excited about that. So for anyone who would like to learn more about this early access program, you can visit go.logdna.com/streaming-early-access.

Okay, Tucker Callaway, CEO of LogDNA, thanks so much for a great conversation about the observability data market. I'm Liesse Jones, thanks for listening!

If you'd like to learn more about LogDNA, visit us at logdna.com.
