Tucker Callaway on the State of the Observability Market

    LogDNA is now Mezmo, but the insights you know and love are here to stay.

    Listen here and read along below.

    Liesse: Tucker Callaway is the CEO of LogDNA. He has more than 20 years of experience in enterprise software with an emphasis on developer and DevOps tools. Tucker drives innovation, experimentation, and a culture of collaboration at LogDNA, three ingredients that are essential for the type of growth that we've experienced over the last few years.

    LogDNA is a comprehensive platform to control all of your log data. It enables teams to ingest and route massive amounts of data from any source to any destination. This capability fuels enterprise-level application development and delivery, security, and compliance use cases where the ability to use log data in real time is mission critical.

    I'm Liesse from LogDNA. Today, Tucker and I are going to talk about how LogDNA sees the market, including the need to extract more value from observability data. 

    Hi Tucker. It's great to be having this conversation with you. To set the foundation for this discussion, can you tell us what market categories we're going to talk about today?

    Tucker: Hi Liesse, thanks for having me on—always fun. Okay, so the market categories we're going to talk about today are log management, which of course is a subset of observability, and cybersecurity. 

    Liesse: Awesome. Most people associate LogDNA with log management, so let's start there. How do you see the log management category changing over the next few years? 

    Tucker: It's a fascinating question, obviously one that I spend a lot of time on—I guess we both do. I believe that log management, as we know it today, is going to be very much disrupted. In fact, in many ways it already has been disrupted. Certainly I understand the draw to consolidated observability platforms that include log management. But what I find really interesting is that, as log management as a traditional discipline starts to be consumed more and more by observability, the log data itself is actually going in a different direction. Right? Whereas we see more of a simplification and consolidation of log management as a discipline, the log data is increasingly being consumed by more use cases and therefore by more consumers, which means there are more requirements, capabilities, and disciplines that need to be applied to the log data while it's in motion. And it needs to get to more destinations than ever before. 

    Liesse: Yeah, absolutely. Can you talk a little bit about pipeline capabilities and what you think that means in relation to log data? 

    Tucker: Yeah. I mean, that's the foundation of it really. The pipeline is what I believe log data is about today. When I say pipeline, I mean the ability to route, store, process, enhance, redact, exclude—all those things. All the different operations you want to take on this data in motion, that's going to just continue to grow because of these use cases and because of the varied level of consumers that are going to continue to need to engage with log data. 
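    To make those in-motion operations concrete, here is a minimal, hypothetical sketch of a log pipeline built from composable steps. The function and field names below are invented for illustration and are not LogDNA APIs.

```python
# A minimal, hypothetical sketch of "operations on log data in motion".
# Function and field names are invented for illustration; they are not LogDNA APIs.
import re
from typing import Callable, Iterable, Iterator, Optional

LogEvent = dict  # e.g. {"source": "api", "level": "error", "message": "..."}
Step = Callable[[LogEvent], Optional[LogEvent]]

def redact(pattern: str, replacement: str = "[REDACTED]") -> Step:
    """Mask sensitive values (emails, tokens) before the event leaves the pipeline."""
    compiled = re.compile(pattern)
    def step(event: LogEvent) -> Optional[LogEvent]:
        return {**event, "message": compiled.sub(replacement, event.get("message", ""))}
    return step

def exclude(predicate: Callable[[LogEvent], bool]) -> Step:
    """Drop events that match the predicate (e.g. noisy debug lines)."""
    return lambda event: None if predicate(event) else event

def enhance(**fields) -> Step:
    """Attach extra context, such as environment or team ownership."""
    return lambda event: {**event, **fields}

def run_pipeline(events: Iterable[LogEvent], steps: Iterable[Step]) -> Iterator[LogEvent]:
    """Apply each step in order; a step returning None removes the event from the stream."""
    steps = list(steps)
    for event in events:
        for step in steps:
            event = step(event)
            if event is None:
                break
        else:
            yield event  # surviving events get routed to their destination(s)

# Usage: redact emails, drop debug noise, tag the environment.
pipeline = [
    redact(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    exclude(lambda e: e.get("level") == "debug"),
    enhance(env="production"),
]
events = [
    {"source": "api", "level": "error", "message": "login failed for user@example.com"},
    {"source": "api", "level": "debug", "message": "heartbeat ok"},
]
for event in run_pipeline(events, pipeline):
    print(event)
```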

    Liesse: How do you see this influencing the cybersecurity and observability categories? 

    Tucker: So these trends are just continuing to grow the relevance of both observability and cybersecurity. Some might argue that the two categories will start to converge with each other. One of the things that underpins all of this is the amount of data that is common between the two.

    It's really interesting. I mean, we see this again and again: massive growth in data driven by the growth in servers, microservices, and cloud migrations. All the things that are happening are just creating more and more data. That, of course, creates more frustration for customers trying to manage that data effectively. And you combine that with the threat landscape that exists today. The cybersecurity challenges are just on the rise. And so, trying to find the signal in that noise and trying to take action on this amount of data, with the modern systems and applications produced today, is a fundamental challenge for our customers. That's having a huge influence on the relevance of log data to cybersecurity and observability use cases. 

    Liesse: Absolutely. And we've seen that some of our most successful customers operate with a DevOps mindset. What do you think the impact of that is on the market trends that you mentioned? 

    Tucker: Yeah, so it's not just DevOps, of course, it’s DevSecOps now.

    So we have three audiences in the mix. As you bring these different teams together, trying to empower them and getting them to shift left on operational disciplines, it creates a really interesting change in the dynamic. Fundamentally, it was DevOps before, so you had two teams trying to collaborate. Moving to DevSecOps means three teams trying to collaborate. And so, it's kind of like in my personal world: having two kids is a lot more than having one kid. 

    Liesse: And having three. 

    Tucker: I dunno about three, but if you're the SRE team or the Ops team trying to work with one other team, that's hard enough. And now you've got a third team in the mix and it feels like an explosion. I think that's going to change the dynamic, because it very much changes the number of consumers of the data. And so, that accelerates the need to work with autonomy while making sure people stay aligned on the innovation that's happening. And out of all that, they expect greater access to data. They need access to the source data, they need the data in the tools that they use, and those tools allow them to take action. So it's not good enough, in this kind of new dynamic, to just have a single pane of glass. You have to get people to a point where they can operate in the world they live in, not switch context into a single pane of glass. 

    Liesse: Yeah, absolutely. And I've heard you talk about a flywheel effect that this has created where DevSecOps is at the center and then branching out from there are innovation, data, and action. Can you talk us through each of those components of the flywheel? 

    Tucker: Yeah. So, it’s interesting in that it's somewhat basic and something we all take for granted, but it's creating a dynamic that we call “the flywheel.” The innovation in not only the ways we deliver technology, but also the methodologies we use to deliver it, is changing rapidly, and it's creating a much greater surface area. It's also creating a much larger attack vector for cybersecurity use cases. What I mean by that is, you have microservices that create a proliferation in the location and number of services, and then you layer DevOps, or agile DevOps-type methodologies, for delivering on top of that, where people are delivering continuously. All of that together is creating a huge amount of data. And when you have that much data, it becomes harder to take action. Once you do take action on it, you have the ability to learn about threats. You have the ability to learn about performance. You have the ability to learn about your customer, which then drives back into the innovation curve. And so you're in a virtuous cycle there. 

    What's the big challenge being addressed today? You have the DevSecOps teams sitting in the middle of that, trying to handle this cycle of innovation and data, trying to solve the cost curve problem that sits within taking action on the data, and trying to keep the flywheel going.

    Liesse: Because you just mentioned the cost curve problem, I'll let you explain it to our listeners. What is the cost curve problem?

    Tucker: Yeah, sorry, that's something we refer to internally all the time, but it's basically the idea that as the amount of log data grows, the value that people derive from it is not growing linearly with that growth in data. And in this case, I'm using data almost as a proxy for cost. So the cost of this data is growing, at least linearly, probably exponentially, but the value people derive out of it isn't. And that's the problem that we're here to solve, especially as it relates to data in motion. 

    Liesse: Absolutely. Thank you for that explanation. So let's dive a little bit deeper into the cybersecurity use case. Because obviously that's really interesting and very relevant for a lot of people who are listening today. So, what is the signal that you’re seeing that observability data is having a bigger and bigger impact on cybersecurity?

    Tucker: Yeah, so at the top level within the cybersecurity category, we see this huge rise and rush of people to adopt XDR solutions. XDR solutions combine traditional SIEM constructs with endpoint detection and response constructs to bring together a more composite view of the threat environment across both endpoints and servers.

    What we see when that happens is that cybersecurity providers are now forced to deal with the scale and volume of log files, the spikiness and the unstructured nature of the data, all the beautiful things that come with logs that are hard to deal with. And so, that creates this huge need, when it comes to cybersecurity, to handle log files effectively. So that's been a big driver. Of course, if you look out at the market, you'll see a number of the larger players in the security space making investments in observability, and a lot of the larger observability players making investments in security. And so, we see this convergence happening. If you look at the TAM for cybersecurity, analytics, and observability, we just see continued growth. So we see the category just continuing to accelerate over time. 

    Liesse: Absolutely. So now that we've laid the foundation for where the market is, I would love to hear from your perspective what the market needs right now. 

    Tucker: So we break down the needs in this space into three fundamental categories. The first is collect and ingest. The second is process and route. And the third is store and analyze. When we talk about collect and ingest, we think about the agent that ingests the data. This is the thing that happens on the client side. I believe that what is required in collect and ingest is a vendor agnostic capability, potentially open source, that allows for a ubiquitous way to collect this kind of machine data. On the store and analyze side we see tremendous opportunity there. Our traditional solution is in the store and analyze part of log management. But increasingly over time, I believe that the cloud data warehouse providers will continue to invest in, and provide, offerings there. And when I think about that, I think about data at rest. 

    Where I think the real opportunity is, and what's missing in the market today, is this concept of process and route. Enterprises require the ability to process and route this data, regardless of the source and regardless of the destination. But they also need to take action on that data as it goes by—as it's in motion. And so, that kind of pipeline that we referred to is where we're focused and what we're excited to bring to market right now. 
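    As a purely illustrative example of how collect/ingest, process/route, and store/analyze might fit together, here is a hypothetical routing configuration. The structure and field names are invented for this sketch and are not a real LogDNA product schema.

```python
# Hypothetical routing configuration tying the three buckets together:
# collect/ingest (sources), process/route (processors + routes), store/analyze (destinations).
# Field names are invented for illustration; this is not a real product schema.
pipeline_config = {
    "sources": [                                        # collect & ingest
        {"name": "k8s-agent", "type": "agent"},
        {"name": "syslog", "type": "syslog", "port": 514},
    ],
    "processors": [                                     # process & route (data in motion)
        {"type": "redact", "pattern": r"\b\d{16}\b"},   # mask card-like numbers
        {"type": "exclude", "when": "level == 'debug'"},
        {"type": "enhance", "add": {"env": "prod"}},
    ],
    "routes": [
        {"match": "level in ('error', 'warn')", "to": ["siem"]},    # security use case
        {"match": "*", "to": ["object-storage", "log-analysis"]},   # store & analyze
    ],
    "destinations": [
        {"name": "siem", "type": "https", "url": "https://siem.example.com/ingest"},
        {"name": "object-storage", "type": "s3", "bucket": "archived-logs"},
        {"name": "log-analysis", "type": "logdna"},
    ],
}
```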

    Liesse: Yeah. Many of the capabilities that you just mentioned are foundational to the LogDNA product—we've been working on them since day one. And right now I know we're in the process of building out a lot of new functionality that will enhance our ability to address these needs, specifically in the process and route category, which is so important. So I would love for you to tell us a little bit about the early access program that LogDNA just announced.

    Tucker: Yeah. Great. So, as you mentioned, we've always been focused on the data pipeline space. That's actually our differentiator. What makes us unique is our pipeline capabilities and our ability to quickly ingest, process, route, analyze, and enhance all the data that comes through. Traditionally, of course, that data has gone into our own systems. What we're excited about is the streaming capability we've released (it's live on IBM, where we have a hundred customers using it today) and are now announcing early access for in our own environment. It allows us to ingest massive amounts of unstructured data and take it from whatever location it's sourced from to whatever location it needs to get to. And so, that data doesn't necessarily have to go into the traditional LogDNA anymore. What's most important is that we get it to the right destination for the right consumers, as we talked about. 

    Liesse: Yeah absolutely. And this is interesting because it's ideally suited for use cases in cybersecurity and enterprise level application delivery, where more data can deliver dramatic outcomes if it's accessible in real time. And that, I think, is something that we're hearing from the market that they really need and don't have a good solution for today.

    Tucker: It's the real-time nature of it that's the challenge, right? It's not that providers out there can't ingest data; it's whether they can ingest it in the spiky way that log data arrives. As we've learned over the last four to five years, log volume can increase a hundred X at a moment's notice. You have a sale or some external event hits, and all of a sudden the traffic that hits an application can massively increase, and yet we still need to be able to process it in real time. So being cloud native and having the ability to scale, to work at cloud scale and at hyperscale, is very important for us. And that's what makes us uniquely suited to handle this type of data and enable those cybersecurity use cases. 

    Liesse: Yeah. Awesome. I know people are super excited about that. So for anyone who would like to learn more about this early access program, you can visit go.logdna.com/streaming-early-access.

    Okay, Tucker Callaway, CEO of LogDNA, thanks so much for a great conversation about the observability data market. I'm Liesse Jones, thanks for listening!

    If you'd like to learn more about LogDNA, visit us at logdna.com.

    Liesse Jones

    10.18.21

    Liesse Jones is the Director of Marketing Communications at Mezmo and the host of the DevOps State of Mind podcast. She manages content strategy, PR, analyst relations, and media. She spends her time making sure that the brand is consistent across the board so that the world knows how cool log data really is.
