Unlocking Business Insights with Telemetry Pipelines

4 MIN READ

Imagine running a large company where data-driven decisions give you a competitive edge. You use a lot of business intelligence tools that tap into vast amounts of data, such as sales figures, inventories, and expenses. This analysis tells you how your company is performing. However, it does not reveal how your "company infrastructure" is performing. This crucial information comes from your systems in the form of telemetry data, such as logs and events.

Telemetry data can tell you, much earlier than your BI systems can, whether users are abandoning carts at an alarming rate, having trouble booking flights, or struggling with a slow website. Yet this information is often not surfaced to business users in a timely fashion, so potential areas of improvement are missed. Modern telemetry pipelines change that.

What is a telemetry pipeline?

A telemetry pipeline manages the collection, enrichment, transformation, and routing of telemetry data from different sources to different destinations. It offers significant benefits for a wide range of use cases, ranging from performance monitoring to user behavior analysis. Telemetry pipelines help organizations optimize operations, improve troubleshooting, and gain valuable business insights.

How telemetry pipelines can generate valuable business insights

Gartner notes that telemetry represents a rich and largely untapped source of business insight beyond event and incident response. Business metrics are often embedded in logs; parsed and aggregated correctly, they offer deep insights into your business operations that can help you deliver a better customer experience and protect your revenue.

For example, in the sample log stream below, we can see the products and quantities ordered from a commerce website and the credit cards used to order them. However, sending credit card information in logs could be a compliance issue, and we want to detect how often such an issue occurs. A telemetry pipeline can help capture such occurrences and send them as metrics to your BI or visualization tools.

Sample log stream data showing PII data like products, quantity ordered, credit cards used, etc.
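As an illustration of the detection step (a minimal sketch in Python, not Mezmo's actual processor), a pipeline stage might pair a card-number regex with a Luhn checksum to cut false positives, then emit a violation count as a metric:

```python
import re

# PAN-like pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum to confirm a digit run is a plausible card number."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:           # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def count_card_violations(log_lines):
    """Return the number of log lines containing a plausible card number."""
    violations = 0
    for line in log_lines:
        for match in CARD_PATTERN.findall(line):
            if luhn_valid(match):
                violations += 1
                break            # count at most one violation per line
    return violations

# Hypothetical log lines, loosely modeled on the sample stream above.
logs = [
    "order id=1001 product=keyboard qty=2 card=4111 1111 1111 1111",
    "order id=1002 product=mouse qty=1 payment=token-ab12",
]
print(count_card_violations(logs))  # → 1
```

The resulting count, rather than the raw log lines, is what gets forwarded downstream as a metric.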

Using the pipeline, we can detect credit card violations and convert them into metrics, as shown below:

Sample log data stream with a credit card number highlighted in red, and Mezmo's user interface demonstrating how credit card data can be encrypted and violations counted

We can also capture other business metrics, such as ordered products, as shown below:

A sample business metric workflow in Mezmo
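As a sketch of that extraction step, assuming a hypothetical `product=... qty=...` log format (not a format Mezmo prescribes), tallying ordered products per log stream might look like:

```python
import re
from collections import Counter

# Hypothetical log format: "order id=1001 product=keyboard qty=2"
ORDER_PATTERN = re.compile(r"product=(\w+)\s+qty=(\d+)")

def ordered_product_counts(log_lines):
    """Tally total quantity ordered per product from order logs."""
    totals = Counter()
    for line in log_lines:
        match = ORDER_PATTERN.search(line)
        if match:
            product, qty = match.group(1), int(match.group(2))
            totals[product] += qty
    return dict(totals)

logs = [
    "order id=1001 product=keyboard qty=2",
    "order id=1002 product=mouse qty=1",
    "order id=1003 product=keyboard qty=3",
]
print(ordered_product_counts(logs))  # → {'keyboard': 5, 'mouse': 1}
```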

Once all the information is captured, it can be sent to a visualization tool such as Grafana to create dashboards your business teams can monitor. For example, the dashboard below shows credit card violations as well as the count of ordered products.

A sample visualization of these business insights in Grafana

Telemetry pipelines continuously collect and analyze data from various sources, like user interaction logs or system performance metrics. This real-time data collection and analysis helps you identify trends, detect anomalies, and monitor user behavior. 

Moreover, by converting logs into metrics, you can reduce event data volume by up to 90%, distilling data into actionable insights at a much lower cost. This is especially helpful when you need to control log data volumes and costs. Rather than sending complete logs to your observability platform, you can identify a pattern and send a count of how many times that pattern occurs in your log stream over a period of time.
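A minimal sketch of this logs-to-metrics idea, assuming log lines that begin with an ISO-8601 timestamp (an assumption for illustration, not a required format): count how often a pattern occurs per time window and forward only the counts.

```python
from collections import Counter
from datetime import datetime

def logs_to_metric(log_lines, pattern, window_minutes=5):
    """Aggregate matching log lines into per-window counts.

    Each line is assumed to start with an ISO-8601 timestamp. Only the
    window counts (a tiny metric series) are forwarded downstream,
    instead of the full log payload.
    """
    counts = Counter()
    for line in log_lines:
        if pattern not in line:
            continue
        ts = datetime.fromisoformat(line.split(" ", 1)[0])
        # Truncate the timestamp to the start of its window.
        bucket = ts.replace(minute=ts.minute - ts.minute % window_minutes,
                            second=0, microsecond=0)
        counts[bucket.isoformat()] += 1
    return dict(counts)

logs = [
    "2024-05-01T10:01:12 checkout failed: card declined",
    "2024-05-01T10:03:40 checkout failed: card declined",
    "2024-05-01T10:07:02 checkout ok",
    "2024-05-01T10:09:55 checkout failed: card declined",
]
print(logs_to_metric(logs, "checkout failed"))
# → {'2024-05-01T10:00:00': 2, '2024-05-01T10:05:00': 1}
```

Four log lines collapse into two metric points here; at production volumes, the same idea is what drives the order-of-magnitude data reduction.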

Mezmo: Your key to better business insights

Mezmo offers comprehensive visibility into your telemetry data, centralizing it to enable deeper business insights. You can ingest your log data from various sources to a single platform, with multiple options for parsing, transformations, and alert settings. This flexibility, along with the enrichment and routing capabilities, ensures that all your telemetry data can be used to generate insights in real time.

Leverage Mezmo to:

  • Extract metrics from logs and events and send them to Grafana, Sysdig, Alerting, Log Analysis, and other tools.
  • Get metrics from Kubernetes telemetry.
  • Gain product insights by monitoring product usage.
  • Convert bulky logs into metrics to reduce data volumes and associated costs.
  • Reduce costs in Datadog, ensure consistency of data between monitoring solutions, and triage faster.
  • Use a telemetry pipeline to organize, filter, and pass critical data from Mezmo Log Analysis through to Datadog as a destination.

In addition to the telemetry pipeline, the Mezmo telemetry data platform offers Log Analysis to help you aggregate, search, and visualize critical log events, identify trends, and unlock the power of log data. Use these insights to discover where user experience can be improved, data storage costs reduced, and operations streamlined.

Discover Mezmo and strategically harness telemetry data for better business insights.

Our customer success stories

Here are some real-life customer stories of how telemetry data (logs, events, and metrics) helped improve performance and optimize resources.

  1. A top airline wanted to monitor the efficiency of its CDN (Content Delivery Network) by tracking the ratio of cache hits to cache misses over a period of time. Telemetry pipelines helped them calculate and analyze these metrics on the fly. The result? With the hit-to-miss ratio in hand, they could take action to improve the overall effectiveness of the CDN.
  2. In another example, the same airline wanted to check how often specific APIs were used so that they could decommission unused ones. They analyzed this quickly with telemetry pipelines, simplifying their operations.
  3. Another customer was looking to track an event over the course of an hour, create metrics, and then send that data to Log Analysis. The telemetry pipelines helped with this transformation, enhancing the efficiency of their analytics.
  4. Systems engineers often need to generate metrics manually, which not only increases their workload but also invites errors. At one large company, engineers generated metrics and then sent them to Prometheus/Grafana for rich visualizations. They wanted to automate this process to reduce the workload and improve accuracy. Telemetry pipelines helped extract the data from each log line, convert the event to a metric, and send it directly to Prometheus/Grafana, saving engineers hours of toil.
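To make the first story concrete, here is a hedged sketch of the cache hit-ratio computation, assuming a generic `cache_status=HIT|MISS` field in the access logs (a common CDN logging convention, not any specific vendor's format):

```python
def cdn_hit_ratio(log_lines):
    """Compute the cache hit ratio from CDN access logs.

    Assumes each line carries a 'cache_status=HIT' or
    'cache_status=MISS' field; lines without either are ignored.
    """
    hits = sum(1 for line in log_lines if "cache_status=HIT" in line)
    misses = sum(1 for line in log_lines if "cache_status=MISS" in line)
    total = hits + misses
    return hits / total if total else 0.0

# Hypothetical access-log lines for illustration.
logs = [
    "GET /fares/nyc-lax cache_status=HIT",
    "GET /fares/sfo-ord cache_status=MISS",
    "GET /fares/nyc-lax cache_status=HIT",
    "GET /fares/nyc-lax cache_status=HIT",
]
print(cdn_hit_ratio(logs))  # → 0.75
```

In a pipeline, this ratio would be emitted periodically as a metric rather than computed over a static list.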

Discover the potential of telemetry pipelines in unlocking business insights for your organization. Request a demo today.
