The Observability Data Opportunity
9.8.21
LogDNA is now Mezmo, but the product you know and love is here to stay.
Observability data, and especially log data, is immensely valuable to the modern business. Making the right decisions, from monitoring the bits and bytes of application code to acting in the security incident response center, requires the right people to generate insights from data as fast as possible.
Alongside the rise of machine data is a widespread cultural and operational shift across enterprises to DevOps (and DevSecOps). It’s a working style that brings different teams together and shifts the development approach left, giving developers greater accountability for the operations and security of their applications. With a greater DevSecOps orientation, teams are accelerating with autonomy, aligned around innovation, and expected to have greater access to data.
The rapid growth of Mezmo, formerly known as LogDNA, illustrates just how valuable this data is and how much friction we can remove from the way autonomous teams use it for application troubleshooting and debugging. With Mezmo's logging platform and software solution firmly embedded in enterprise DevOps teams, our customers are turning to us to solve a deeper level of business and technology issues, including the need to:
- Centralize data from any source in a cost-effective manner
- Leverage data to solve more problems across the company
- Get data to any destination or data consumer in the company in real time
Understanding the Observability Data Mess
Log data is the largest, and arguably most important, category of observability data. It underpins all applications and systems. Yet, despite the perceived value of all of this data and the hype around observability, the vast majority of observability data remains dark. It’s wasted and expensive.
What’s keeping this data in the dark? Our customers tell us that the scale, complexity, wide variety of data consumers, and runaway cost make it impossible to get value out of all of their machine data. A number of technical and organizational challenges are also holding them back, including:
- Deep-rooted data and organizational silos that require distinct data workflows for each team and its tools. In many companies, functions like application performance, security, and real-time decision making are owned by specific teams or titles. Even the tools they use often exist in silos (for example, log management tooling is given only to IT operations), which makes it nearly impossible to access the information you need to understand another team's functions. This creates a massive barrier to enterprise-wide sharing.
- The shift to cloud-native and hybrid cloud environments delivers massive volumes of data, along with tremendous diversity, erratic spikes, and complexity. These factors combine to pound enterprises with cost, compliance, and other data management and operational headaches. In addition, legacy big data approaches aren’t suited to this mix of structured and unstructured data, which needs to be available in real time.
- Heavy-handed single-pane-of-glass approaches are not sufficient to transform and route observability data to the appropriate storage destinations in a timely and cost-effective way. They cannot be everything to everybody, and the result is a watered-down experience that leaves specialized teams frustrated. Streaming machine data is a specialization in its own right, apart from any one business or operational function. This dynamic continues to reinforce technology and organizational silos, and bottles up valuable data to rot in storage.
How Mezmo Uniquely Unlocks the Value of Machine Data
As the amount of log data grows, the value extracted from it hasn’t historically kept pace. We call this the machine data cost-curve problem, and it’s the problem we intend to tackle head-on and solve.
Unlocking the value of log data across the multitude of enterprise needs, from developer productivity to cybersecurity, requires the ability to ingest data in a way that is agnostic to source; to process and route that data, regardless of destination; and to store and analyze it in a way that meets the requirements of each of its consumers.
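To make the ingest-process-route idea concrete, here is a minimal, purely illustrative sketch of such a pipeline. It is not Mezmo's API: the event shape, the routing rules, and the destination names (siem, apm, cold_storage) are all assumptions chosen for the example.

```python
# Illustrative only: a toy pipeline showing source-agnostic ingest,
# normalization, and rule-based routing. All names here are hypothetical,
# not Mezmo APIs.
import json
from datetime import datetime, timezone

def normalize(raw: dict) -> dict:
    """Coerce events from any source into one common shape."""
    return {
        "timestamp": raw.get("ts") or datetime.now(timezone.utc).isoformat(),
        "app": raw.get("app", "unknown"),
        "level": (raw.get("level") or "info").lower(),
        "line": raw.get("message") or raw.get("line", ""),
    }

# Each rule maps a predicate to a destination label (SIEM, APM tool, archive).
ROUTING_RULES = [
    (lambda e: e["level"] in ("error", "critical"), "siem"),
    (lambda e: e["app"].startswith("checkout"), "apm"),
    (lambda e: True, "cold_storage"),  # default: cheap long-term storage
]

def route_event(raw: dict) -> str:
    """Normalize one raw event and send it to the first matching destination."""
    event = normalize(raw)
    for predicate, destination in ROUTING_RULES:
        if predicate(event):
            print(f"-> {destination}: {json.dumps(event)}")
            return destination
    return "dropped"

if __name__ == "__main__":
    route_event({"app": "checkout-api", "level": "ERROR", "message": "payment timeout"})
    route_event({"app": "robot-fleet", "message": "battery at 80%"})
```

The point of the sketch is the separation of concerns: ingestion accepts anything, normalization produces one shape, and routing decides per consumer, so adding a new destination means adding a rule rather than re-plumbing every source.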
Mezmo has always believed that optimizing the ingestion pipeline was essential for solving the industry need for true observability. Our innovations in this area form a powerful foundation to meet today’s critical scale, storage, and routing requirements.
- Mezmo is cloud native and built for scale. This has made it possible to ingest, process, route, and analyze petabytes of log data for customers, and to store that data at an affordable rate. Granular controls also help contain costs (from those inevitable erratic spikes) and meet compliance demands.
- We’ve built a platform that solves the challenge of scaling to meet the spiky demands of machine data, a need shared by enterprises that are migrating to the cloud and undergoing a digital transformation.
- By ingesting, processing, streaming, and storing machine data at hyperscale, Mezmo gives enterprises the freedom to deploy that data to the application and human specialists who best know how to unlock its value. Data isn’t bottled up in a single pane of glass, but unleashed for use across all possible panes of glass.
Already, these platform strengths allow some of our customers to leverage data from Mezmo in new machine data pipeline use cases. For example:
- A major U.S. airline deployed Mezmo to give developers and SREs across 27 different agile product teams access to their log data. These teams use Mezmo to collect data from many microservice and monolithic applications across multiple clouds and on-premises environments, and stream it into other observability tools for a diverse set of use cases.
- A major cloud provider uses Mezmo as its embedded provider for both logging and monitoring. Our SaaS and private cloud solutions are deployed across 11 global data centers and empower both internal teams and enterprises using its cloud services to fully understand their application and system performance. And now, 100 accounts on its cloud platform are streaming logs from Mezmo to their SIEM.
- One of the world’s largest e-commerce companies uses Mezmo to capture IoT data from their warehouse robots. Centralizing their logs into a single platform allows them to troubleshoot issues with their robots as they happen in the field. Seventy-five percent of employees in the company's robotics division access their logs, which is made possible by having a tool that’s optimized for a DevOps culture.
Join the New Mezmo Streaming Early-Access Program
You can join these companies to overcome machine data management headaches and unlock more value from your log data across the enterprise.
Today, we announced the early-access beta of a powerful new platform feature: Mezmo Streaming.
Mezmo Streaming lets enterprises ingest all of their log data into a single platform and then route it for any enterprise use case. This new feature takes full advantage of Mezmo's unparalleled ability to quickly ingest massive amounts of structured and unstructured data, normalize it, and apply granular storage controls to contain costs and meet compliance needs. It’s ideally suited for use cases in cybersecurity and enterprise-level application delivery, where more data can deliver dramatic outcomes if it’s accessible in real time.
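As a concrete starting point for feeding such a pipeline, the sketch below shows how an application might push structured log lines into the platform. It assumes the classic LogDNA ingestion REST endpoint (logs.logdna.com/logs/ingest) with the ingestion key passed as the basic-auth user; that endpoint shape predates the Mezmo rebrand and may have changed, so treat the URL, parameters, and payload as assumptions and check the current documentation. The ingestion key, hostname, and app/level values are placeholders.

```python
# A minimal ingestion sketch, assuming the classic LogDNA REST endpoint
# is still accepted post-rebrand. Ingestion key and field values are
# placeholders, not real credentials or recommended settings.
import time
import requests

INGESTION_KEY = "YOUR_INGESTION_KEY"  # placeholder
INGEST_URL = "https://logs.logdna.com/logs/ingest"

def send_lines(lines: list[dict], hostname: str = "example-host") -> None:
    """POST a batch of log lines; the key is passed as the basic-auth user."""
    resp = requests.post(
        INGEST_URL,
        params={"hostname": hostname, "now": int(time.time() * 1000)},
        auth=(INGESTION_KEY, ""),
        json={"lines": lines},
        timeout=10,
    )
    resp.raise_for_status()

send_lines([
    {
        "timestamp": int(time.time() * 1000),
        "line": "payment service timed out after 3 retries",
        "app": "checkout-api",
        "level": "ERROR",
    },
])
```

Once lines like these land in the platform, streaming and routing to downstream consumers (a SIEM, an APM tool, archival storage) happens on the platform side rather than in each application.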
For too long, enterprises have had to make difficult choices around how to use all of their machine data while controlling skyrocketing costs. We will continue to build a comprehensive platform that enables anyone to ingest, process, route, analyze, and store all of their log data in a way that makes sense for them.
I’m excited to continue sharing the progress we are making in this space, and I look forward to adding even more value that enables you to build capabilities on top of your data in motion.