Site Reliability Engineers
Keep Services Humming
Accelerate Resolution Times
Improve observability by boosting the signal-to-noise ratio of your telemetry data and directing it to the appropriate team. Aligning data formats across platforms enhances collaboration and root cause identification, while data enrichment deepens problem comprehension. With a collaborative team armed with superior data, issue resolution becomes more effective.
Get Deeper Insights
You ensure great digital experiences “24 by 7”. But the data you need to meet your SLOs is often buried in terabytes of telemetry. Mezmo can parse, enrich, and transform that data in motion to help you monitor SLO adherence and make the decisions you need to keep your customers happy and subscribed.
You have more important things to do than calculating compute or storage requirements or maintaining open-source code. Mezmo gets you the observability answers you need without the toil. Automatically scale infrastructure and data retention as your requirements grow, and stay in control of your consumption with no hidden surprises.
WHAT IS MEZMO TELEMETRY PIPELINE?
Mezmo Telemetry Pipeline helps you collect, transform, and route telemetry data to control costs and drive actionability. With Mezmo you can centralize data from various sources via our open platform, apply out-of-the-box and custom processors to transform data, and route it to any observability platform, including Splunk, DataDog, New Relic, Grafana, and Prometheus.
Mezmo empowers Engineering, ITOps, and Security teams to make crucial decisions faster, while keeping costs in check.
Control data volume and costs by identifying unstructured telemetry data, removing low-value and repetitive data, and using sampling to reduce chatter. Employ intelligent routing rules to send certain data types to low-cost storage.
- Filter: Use the Filter Processor to drop events that may not be meaningful or to reduce the total amount of data forwarded to a subsequent processor or destination.
- Reduce: Combine multiple log input events into a single event, based on specified criteria, over a specified window of time.
- Sample: Send only the events required to understand the data.
- Dedupe: Reduce “chatter” in logs by emitting only the first matching record from each set of records being compared. The Dedupe Processor works most effectively when data overlaps across the fields being compared.
- Route: Intelligently route data to any observability, analytics, or visualization platform.
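Conceptually, the cost-control processors above behave like small functions applied to a stream of events. The following Python sketch illustrates the idea only; it is not Mezmo's actual processor API, and the field names (`level`, `message`) are invented for the example:

```python
import random

def filter_events(events, predicate):
    """Filter: drop events that don't satisfy the predicate."""
    return [e for e in events if predicate(e)]

def sample_events(events, rate):
    """Sample: keep roughly `rate` (0.0-1.0) of the events."""
    return [e for e in events if random.random() < rate]

def dedupe_events(events, fields):
    """Dedupe: emit only the first record seen for each field combination."""
    seen, out = set(), []
    for e in events:
        key = tuple(e.get(f) for f in fields)
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out

events = [
    {"level": "debug", "message": "heartbeat"},
    {"level": "error", "message": "db timeout"},
    {"level": "error", "message": "db timeout"},
]

# Keep only errors, then collapse repeated records into one.
errors = filter_events(events, lambda e: e["level"] == "error")
unique = dedupe_events(errors, ["level", "message"])
```

Chaining the functions this way mirrors how pipeline processors are composed: each stage receives the previous stage's output, so low-value data is dropped before it ever reaches a paid destination.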
Increase your data value and optimize data flows by transforming and enriching data. Modify data as needed for compatibility with various end destinations. Enrich and augment data for better context. Scrub sensitive data, or encrypt it to maintain compliance standards.
- Event to Metric: Create a new metric within the pipeline from existing events and log messages.
- Parse: Apply a variety of parsing operations, such as converting strings to integers or parsing timestamps.
- Aggregate metrics: Metric data often has more data points than needed to understand the behavior of a system. Aggregate away the excess to reduce storage without sacrificing value.
- Encrypt: Use the Encrypt Processor when sending sensitive log data to storage, for example, when retaining log data containing account names and passwords.
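To make the transform stage concrete, here is a hedged Python sketch of parsing a raw log line into typed fields and then deriving a metric from it. The log format, field names, and metric name are invented for illustration; in Mezmo these steps are handled by the Parse and Event to Metric processors rather than hand-written code:

```python
import re
from datetime import datetime

# Hypothetical access-log line: timestamp, method, path, status, latency.
LINE = "2024-05-01T12:00:00Z GET /api/users 200 37ms"
PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d+) (?P<latency>\d+)ms"
)

def parse(line):
    """Parse: turn a raw string into an event with typed fields."""
    event = PATTERN.match(line).groupdict()
    event["status"] = int(event["status"])            # string -> integer
    event["latency_ms"] = int(event.pop("latency"))   # string -> integer
    event["ts"] = datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ")
    return event

def event_to_metric(event):
    """Event to Metric: emit a metric data point from a parsed event."""
    return {
        "name": "http.request.latency_ms",
        "value": event["latency_ms"],
        "tags": {"path": event["path"], "status": event["status"]},
    }

metric = event_to_metric(parse(LINE))
```

Parsing first, then deriving metrics, is the typical ordering: once fields are typed, downstream processors can aggregate, route, or alert on them without re-reading the raw text.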
With Mezmo you can extract insights before the data reaches high-cost destinations. You can monitor the health of your data pipeline and run various tests before you deploy your solution.
- Use simulation to test your pipelines before you deploy.
- Monitor the health of pipelines with out-of-the-box dashboards.
- Derive metric data from logs by parsing the log data to extract specific information.
- Count specific events within the log data and use that count to create a metric.
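The last point above, counting events to create a metric, can be sketched in a few lines of Python. The log lines and metric name are invented for the example; the sketch only shows the shape of the technique, not Mezmo's implementation:

```python
# Hypothetical raw log lines.
logs = [
    "user login failed",
    "user login ok",
    "user login failed",
    "cache miss",
]

# Count a specific event type and turn the count into a metric data point.
failed_logins = sum(1 for line in logs if "login failed" in line)
metric = {"name": "auth.login.failures", "value": failed_logins}
```

A count-based metric like this is far cheaper to store and alert on than the raw log lines it was derived from, which is the point of extracting insights before data reaches high-cost destinations.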