Gartner IOCS Conference Recap: Monitoring and Observing Environments with Telemetry Pipelines

7 MIN READ

Last week, I attended the Gartner IT Infrastructure, Operations & Cloud Strategies Conference (IOCS). Gartner IOCS is my favorite conference every year because of the quality and depth of the presentations. Gartner analysts deliver most sessions and put a lot of effort into the presentations and supporting research.

I’d like to highlight two sessions that I found to be very informative. One was “Use Telemetry Pipelines to Efficiently Monitor Your Hybrid Environments” by Gregg Siegfried, VP Analyst at Gartner. The second was “The Future of Observability” by Mrudula Bangera, Director Analyst at Gartner. Here are some highlights:

Use Telemetry Pipelines to Efficiently Monitor Your Hybrid Environments

Gregg Siegfried started by saying that I&O (Infrastructure and Operations) has long had an image problem. Telemetry is treated like wastewater, and we, the plumbers, are left to make sure it’s properly dealt with. Yes, there’s an operational aspect to gathering telemetry data in terms of service reliability and application performance. However, business-critical insights are in the data if you “know where to look.”

Data Engineering Meets Telemetry Pipelines

Data engineering is precisely what I&O needs, and happily, there are tools to help: telemetry pipelines are the pathway to engineering operational telemetry data. Gregg stressed that data engineering principles must be applied to infrastructure and operations telemetry data.

Data engineering principles are required to manage and optimize operational telemetry data.

Telemetry Pipelines Help Data Engineering

Gregg noted that the arrangement of I&O-specific telemetry pipelines mirrors that of the general-purpose data engineering pipeline. However, there is some specific terminology to know for operational telemetry data.

A telemetry pipeline can route highly optimized data to one or more destinations.

Gartner defines the functions of a telemetry pipeline as Collect, Transform, Enrich, and Route. Collection is straightforward and may involve a vendor agent or other popular interfaces, including syslog, Fluentd, Fluent Bit, Logstash, OTLP, or Splunk forwarders. Transform manipulates the data into more efficient forms: you can change the structure and format, turn logs into metrics, rename fields, mask fields, sample, filter, normalize, reduce, or apply any of a wide variety of transformations. The Route function then sends the data to one or more destinations based on use case.
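To make the Transform function concrete, here is a minimal sketch in Python of what one transform step might do. The event shape and field names are hypothetical, not taken from any particular pipeline product:

    # Sketch of a Transform step: reshape a raw log event into a leaner form.
    # The input and output field names here are hypothetical.
    def transform(event: dict) -> dict:
        return {
            "service": event.get("svc", "unknown"),          # rename "svc" -> "service"
            "level": event.get("severity", "info").lower(),  # normalize casing
            "message": event.get("msg", ""),
            # verbose fields (e.g., full stack traces) are dropped to cut volume
        }

    raw = {"svc": "checkout", "severity": "ERROR", "msg": "payment timeout"}
    print(transform(raw))  # {'service': 'checkout', 'level': 'error', 'message': 'payment timeout'}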

What Does Data “Enrichment” Mean?

Enrichment means adding context, sometimes from external sources, to your data in motion. Examples of data enrichment include timestamps, geolocation data, names, IDs, or anything else that can be correlated with the data to help analysis at the destination system. Gregg also categorized data rehydration as an enrichment function.
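As a rough sketch, enrichment might look like the following in Python; the lookup table stands in for an external source such as a GeoIP service or CMDB, and the field names are hypothetical:

    from datetime import datetime, timezone

    # Stand-in for an external enrichment source (e.g., a GeoIP service).
    GEO_LOOKUP = {"10.0.0.5": "us-east-1", "10.0.0.9": "eu-west-1"}

    def enrich(event: dict) -> dict:
        # Add a timestamp and a region derived from the client IP so the
        # destination system can correlate and analyze the event.
        event["received_at"] = datetime.now(timezone.utc).isoformat()
        event["region"] = GEO_LOOKUP.get(event.get("client_ip"), "unknown")
        return event

    print(enrich({"client_ip": "10.0.0.5", "message": "login ok"}))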

Use Cases for Telemetry Pipelines

Gregg described the common use cases for telemetry pipelines.

Cost Control

Some IT organizations send terabytes of telemetry data without understanding what is needed or what can be discarded or placed into object storage. A telemetry pipeline is a natural place to make these decisions before the data incurs a toll by driving up ingestion charges on the destination system. For example, a telemetry pipeline can filter, deduplicate, summarize, or route data to low-cost object storage.
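A minimal sketch of such a decision point in Python; the severity levels and destination names are hypothetical:

    import hashlib

    _seen: set[str] = set()

    def route(event: dict) -> str:
        # Filter: low-value severities go straight to low-cost object storage.
        if event.get("level") == "debug":
            return "object_storage"
        # Deduplicate: only the first copy of an identical message is sent to
        # the (expensive) analytics backend; repeats go to object storage.
        digest = hashlib.sha256(event.get("message", "").encode()).hexdigest()
        if digest in _seen:
            return "object_storage"
        _seen.add(digest)
        return "analytics_backend"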

Consolidation From the Edge

Centralized or edge pipelines consolidate and organize data within the user’s environment. With a SaaS service, central configuration control and deployment can significantly enhance the ability to scale data optimization. Gregg pointed out that it is not always a requirement to egress data to an external service or SaaS to process it. This can be a huge advantage for organizations concerned with data integrity.

Editorial note: Mezmo introduced Edge Pipelines in October 2023; read Introducing Mezmo Edge: A Secure Approach To Telemetry Data.

Maintaining Unified Taxonomy

Different teams and tools will naturally name and categorize things differently. For example, one team will use the term “Host ID,” and another will use “Node Name.” This simple difference can complicate backend analytics. However, a telemetry pipeline can normalize such differences to improve consistency, clarity, and analysis. In addition, if your telemetry data contains PII or other confidential information, a centralized telemetry pipeline can apply a consistent set of rules for masking or redaction instead of leaving that work to be repeated by every individual team.
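A sketch of both ideas in Python; the alias table and the email pattern are illustrative assumptions:

    import re

    # Map each team's field name onto one canonical taxonomy.
    FIELD_ALIASES = {"Host ID": "host", "Node Name": "host", "hostname": "host"}

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def normalize(event: dict) -> dict:
        # Unify field names so backend analytics sees one schema.
        out = {FIELD_ALIASES.get(k, k): v for k, v in event.items()}
        # Apply one consistent masking rule instead of per-team redaction.
        if "message" in out:
            out["message"] = EMAIL.sub("<redacted>", out["message"])
        return out

    print(normalize({"Host ID": "web-42", "message": "reset for bob@example.com"}))
    # {'host': 'web-42', 'message': 'reset for <redacted>'}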

OpenTelemetry - Many to Many

It’s not uncommon to have many collectors of telemetry data and multiple observability and analytics tools. For example, you may have many OpenTelemetry collectors. This creates a “many to many” complexity that can be simplified with a telemetry pipeline. The telemetry pipeline can also co-reside with a centralized control plane, easing the management of many distributed or edge-located collectors.
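A toy illustration of that simplification in Python; the routing table and destination names are hypothetical:

    # Instead of wiring every collector to every tool (an N x M mesh of
    # connections), each event flows through one pipeline that fans it out.
    ROUTES = {
        "metric": ["grafana"],
        "trace": ["datadog"],
        "log": ["splunk", "object_storage"],
    }

    def fan_out(event: dict) -> list[str]:
        return ROUTES.get(event.get("signal"), ["object_storage"])

    print(fan_out({"signal": "log", "message": "disk full"}))  # ['splunk', 'object_storage']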

Use Case Summary

Overall, a telemetry pipeline can significantly reduce costs, increase efficiency, and improve collaboration. Gregg used the expression “manage your telemetry, or it will manage you.”

“Manage Your Telemetry, or it Will Manage You”

Open-Source Telemetry Pipelines

In addition to surveying various telemetry pipeline vendors, including Mezmo, Gregg described some of the open-source options for telemetry pipelines.

Gregg noted that open-source Vector (acquired by Datadog) is a good option because it is optimized as a telemetry pipeline rather than a generic data-streaming system such as Kafka. Vector is well-documented, has a wide selection of sources and sinks, and can be deployed in a highly available manner. If you are staffed to support open source at this scale, Vector is preferred because of its core telemetry pipeline functionality. Also mentioned as open-source options were observIQ’s BindPlane OP, Apache Kafka, and Apache NiFi.

Recommendations

In conclusion, the benefits of telemetry pipelines include cost reduction, analysis simplification, and improved incident response.

Gartner highlighted several recommendations when considering telemetry pipelines.

The Future of Observability

Mrudula Bangera started by saying that monitoring is dead. That definitely got everybody’s attention! Monitoring fails to provide the context to understand the “whys” behind anomalies, making root cause identification very difficult. Observability, by contrast, helps with unknown unknowns: the answers you don’t find in your monitoring dashboard.

“Monitoring is Dead”

More than Metrics, Logs, and Traces

So, observability is the future, but it is not delivered by one tool. Observability is delivered through a combination of capabilities, including the ability to analyze metrics, logs, and traces. However, as Mrudula explained, observability should also include telemetry data about APIs, networking, service mesh, and service topology, as well as business context.

Historically, telemetry data was gathered by vendor-specific agents. Increasingly, though, it is important to consider open standards such as OpenTelemetry and eBPF (extended Berkeley Packet Filter). Mrudula explained that the process of instrumenting for telemetry data is becoming more vendor-agnostic.
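As a small example of what vendor-agnostic instrumentation looks like, here is a minimal tracing sketch using the OpenTelemetry Python SDK (assuming the opentelemetry-sdk package is installed); swapping the console exporter for an OTLP exporter would send the same spans to any OTLP-compatible backend:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        ConsoleSpanExporter,
        SimpleSpanProcessor,
    )

    # Wire up a tracer provider that prints spans to the console.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name

    with tracer.start_as_current_span("process_order"):
        pass  # application work goes here; the span is exported on exit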

Telemetry Data Overload Causes Pain

She also explained that the huge volumes of telemetry data generated by observability agents and collectors make costs unpredictable. This data can reach petabyte scale, incurring huge expenses for observability solution subscribers.

Observability vendors meter pricing by the quantity of data ingested, and because most users do not control or optimize their telemetry data, costs can spike unexpectedly, leaving subscribers feeling helpless.

Telemetry Pipelines Can Help 

Mrudula explained that telemetry data overload and the lack of control over it can be addressed with a telemetry pipeline. A telemetry pipeline can filter the data, derive metrics, and route the telemetry to the appropriate tools and teams or to low-cost storage, significantly increasing efficiency.
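For instance, deriving a metric from a log stream might look like this sketch in Python; the event fields are hypothetical:

    from collections import Counter

    def logs_to_metrics(events: list[dict]) -> dict:
        # Ship one small counter instead of every raw error line.
        errors = Counter(
            e.get("service", "unknown")
            for e in events
            if e.get("level") == "error"
        )
        return {"error_count": dict(errors)}

    stream = [
        {"service": "api", "level": "error"},
        {"service": "api", "level": "info"},
        {"service": "api", "level": "error"},
    ]
    print(logs_to_metrics(stream))  # {'error_count': {'api': 2}}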

In a slide shown during the Future of Observability presentation, Gartner recommended deploying a telemetry pipeline to reduce costs and stay in control of your data.

About Mezmo

In the Gartner presentations last week, vendors were not the main focus, but I’d like to take a moment to describe Mezmo’s solution.

Mezmo offers a Telemetry Pipeline deployable in your environment and centrally managed, or deployable in the Mezmo Cloud. Unlike competitive solutions, both Edge and Cloud use the same control plane and are managed as a single set of pipelines. Mezmo supports a wide variety of sources, including OTLP, Azure Event Hub, AWS S3, Kafka, Kubernetes, the Datadog agent, and Splunk HEC. Processors include the ability to parse data from common sources, dedupe, filter, sample, and transform logs into metrics, to name a few. Many popular destinations are supported, including Datadog, Splunk, Grafana, and S3 for low-cost storage.

A visual user interface makes Recipe selection and Pipeline configuration easy. If preferred, Pipelines can be created as code and automated using Terraform. All Mezmo functions are accessible via APIs.

The Mezmo workflow starts with understanding your data and then recommends pre-configured pipeline Recipes to optimize it for common log patterns.

If you’d like to understand your data better and quickly realize the power of what a telemetry pipeline can do, let’s get in touch!
