Understand Kubernetes Telemetry Data Immediately With Mezmo’s Welcome Pipeline

4 MIN READ

Most vendor trials take quite a bit of time and effort. Now, with Mezmo’s new Welcome Pipeline, you can get results from your Kubernetes telemetry data in just a couple of minutes. But first, let’s discuss why Kubernetes data is such a challenge; then we’ll walk through the steps.

Kubernetes and business insights

Kubernetes has become a staple in orchestrating containerized applications. Its robustness makes it excellent for scaling and managing complex systems. However, despite its utility, Kubernetes-deployed applications and infrastructure generate a ton of data. While this data can reveal important key performance indicators (KPIs) like system performance, latency, and user behavior metrics, getting those insights is often tricky due to the volume and complexity of Kubernetes' telemetry data. Extracting insights from this data often means sifting through verbose logs, indecipherable telemetry data, and a jungle of metrics.

The challenges

  • Data Overload: With Kubernetes, you're never short of data. But too much data can be overwhelming, making it hard to find the information you actually need. A good example is finding yourself buried in log data, system metrics, and telemetry feedback when all you wanted were key performance indicators (KPIs). This high volume of data also drives up costs.
  • Complexity: Kubernetes is not a monolithic platform. With multiple microservices running, linking data to actual business metrics becomes cumbersome. Tracking a single-user interaction across multiple services would be like finding a needle in a haystack.
  • Lack of User-Friendly Tools: Many solutions either provide too much or too little, requiring a steep learning curve or leaving you wanting more. You may encounter tools that flood you with raw data dumps or others that offer an inadequate snapshot, with neither helping you make quick, informed decisions.

Simplify Kubernetes observability with our new Welcome Pipeline

Figure 1: Welcome Pipeline

Our telemetry pipeline addresses these issues, harnessing the raw power of Kubernetes data to deliver actionable insights without the complexity. The new Welcome Pipeline lets you see that power, versatility, and potential for yourself.

Quick and easy setup

With our new Welcome Pipeline, you can start pulling insights in five minutes or less. Just connect your Kubernetes cluster and configure a few settings. It's that simple.

Use the tools you know

There is no need to learn (or spend on) a new interface; our pipeline works with popular data observability tools you're likely already using and invested in, like Grafana or Datadog.

Core value: actionable insights

Our pipeline focuses on what matters: delivering insights that help with business decision-making. We do the heavy lifting on the data side so you can concentrate on making informed decisions.

You choose how to visualize

We don't lock you into using our visualization tools. Use the ones you're comfortable with and tailor your visualizations to your unique business needs.

Rapid feature releases

We focus on delivering new features quickly, specifically features that directly enhance Kubernetes observability. No waiting around for months for crucial updates while we build out a cumbersome UI.

Getting started: setting up your Welcome Pipeline

Figure 2: Mezmo Free Trial Sign-Up Page

Head to Mezmo's sign-up page, fill out the sign-up form, then check your email to verify and access your new account.

Figure 3: Mezmo Welcome Pipeline Onboarding, Organization Tab

Once you’ve accessed your account, you should be at the onboarding screen. From here, you can quickly set up your pipeline: 

  • Organization: Begin by naming your organization.
  • Deploy Collector: Follow the three steps within the onboarding wizard to either install a new collector or configure your existing one.
  • View Observability Pipeline: See the pipeline you just configured.

Understanding your Telemetry Pipeline


Congratulations! At this point, you’ve set up your pipeline and are ready to go. Above (in Figure 1) is what your pipeline would look like with your data. Let’s dive into the components. 

Our example above outlines three major functional areas, which introduce the kinds of processing you can employ within the pipeline.

Counting log lines

Figure 4: Differentiating Kubernetes Logs and Creating Metrics

In the first section, we focus on the volume of logs coming from our Kubernetes environment. To do this, we create two different metrics:

  • node_entry: the number of log entries produced by each Kubernetes node in our cluster.
  • log_monitoring: the number of log entries produced by each container running in our Kubernetes cluster.

In the case of the node_entry metric, we also feed it into an Aggregate processor, which is configured to emit a summarized value only once every 60 seconds.
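Conceptually, the counting step can be sketched in a few lines of Python. This is not Mezmo's implementation; the event fields ("node", "container") are illustrative assumptions, and in the real pipeline the node_entry counts would additionally pass through the Aggregate processor's 60-second window:

```python
from collections import Counter

def count_log_lines(events):
    """Count log entries per Kubernetes node and per container."""
    node_entry = Counter()       # log entries produced by each node
    log_monitoring = Counter()   # log entries produced by each container
    for event in events:
        node_entry[event["node"]] += 1
        log_monitoring[event["container"]] += 1
    return node_entry, log_monitoring

# Three sample log events from two nodes and two containers.
events = [
    {"node": "node-a", "container": "api"},
    {"node": "node-a", "container": "worker"},
    {"node": "node-b", "container": "api"},
]
node_entry, log_monitoring = count_log_lines(events)
print(node_entry["node-a"], log_monitoring["api"])  # 2 2
```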

Health sentiment

Figure 5: Extracting Health Sentiments

 

The second section, stemming from the Route processor that adds metric counters, focuses on extracting health sentiments from your log data. These signals typically hide in your logs; Mezmo makes them easy to extract and understand, giving you a comprehensive view of the health of your Kubernetes system at a glance. For example, this pipeline extracts the following health signals:

  • Errors
  • Negative Sentiments
  • Exceptions
  • Out-of-Memory Conditions
  • Keyword Filtering 
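As a rough illustration, health-sentiment extraction can be thought of as keyword matching over log lines. The categories below mirror the list above, but the patterns are assumptions made for illustration, not Mezmo's actual rules:

```python
import re

# Assumed keyword patterns per health-sentiment category (illustrative only).
SENTIMENT_PATTERNS = {
    "error": re.compile(r"\berror\b", re.IGNORECASE),
    "exception": re.compile(r"\bexception\b|\btraceback\b", re.IGNORECASE),
    "out_of_memory": re.compile(r"out of memory|oomkilled", re.IGNORECASE),
    "negative": re.compile(r"\bfail(ed|ure)?\b|\btimed? ?out\b", re.IGNORECASE),
}

def extract_sentiments(line):
    """Return the health-sentiment categories a single log line matches."""
    return [name for name, pattern in SENTIMENT_PATTERNS.items()
            if pattern.search(line)]

print(extract_sentiments("OOMKilled: container exceeded its memory limit"))
# ['out_of_memory']
```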

Telemetry enrichment

Figure 6: The “Enrich Ops Tags” processor

The final section sends all of our telemetry data through an enrichment processor that, as a best practice, adds helpful tags so the data can be traced back to where it was processed. All telemetry flowing through this pipeline will include additional tagging information that appears in your observability tools, helping teams understand where the data came from, what pipeline(s) it flowed through, and, most importantly, where they can go to see the pipeline definition:

Figure 7: Additional tagging information via the enrichment processor

These processors are preconfigured and will work with any setup you’ve provided in the onboarding.  
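The enrichment step amounts to stamping each event with provenance tags. Here is a hypothetical sketch; the tag names and the pipeline URL are illustrative assumptions about the schema, not Mezmo's actual tag format:

```python
# Hypothetical enrichment step: tag names and URL are illustrative assumptions.
def enrich_ops_tags(event, pipeline_id, pipeline_url):
    """Attach provenance tags so downstream tools can trace the event."""
    tags = event.setdefault("tags", {})
    tags["pipeline_id"] = pipeline_id    # which pipeline processed the event
    tags["pipeline_url"] = pipeline_url  # where to view the pipeline definition
    return event

event = enrich_ops_tags(
    {"message": "GET /healthz 200"},
    pipeline_id="welcome-pipeline",
    pipeline_url="https://app.mezmo.com/pipelines/welcome",  # assumed URL
)
print(event["tags"]["pipeline_id"])  # welcome-pipeline
```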

You can route your data to virtually any observability platform from here. In Figure 1, we highlighted a potential logs consumer and metrics consumer as two examples, but you can replace those destinations with others, such as:

  • Grafana: Feed Grafana the data from Mezmo to construct intuitive dashboards for trend analysis and to pinpoint performance hotspots and bottlenecks within your system.
  • Datadog: Route data from Mezmo to Datadog for enhanced anomaly detection and streamlined alert management. 
  • Prometheus Write Endpoint: Channeling telemetry data from Mezmo directly to the Prometheus Write Endpoint ensures real-time data ingestion and swift alert capabilities. You’ll also benefit from instant metric updates and proactive system health checks. 

Figure 8: Grafana Dashboard with Heatmap to Easily Visualize the Data Processed in the Demo Pipeline

Mezmo offers a custom Grafana dashboard to easily view the data within your pipeline. To import this dashboard inside your Grafana instance, follow these instructions: 

  1. Go to https://grafana.com/grafana/dashboards/
  2. Enter “Mezmo” in the search bar
  3. Click on the “Mezmo Welcome Dashboard”
  4. Click the “Copy ID to Clipboard” button
  5. Within your Grafana instance, click “Dashboards” and then “Import”
  6. Paste the dashboard ID and click “Load”
  7. From the dropdown menu, select the Prometheus instance the metrics are being pulled from
  8. Click “Import”

Take the next step

Understanding our telemetry pipeline's power, versatility, and value comes with hands-on experience. Simplify your data processes, gain actionable insights, and make better decisions, all in less than five minutes of setup time.

Get started for free today or talk to a member of our team to see the difference firsthand.
