LogDNA and CI/CD

4 MIN READ


Image source: https://opsani.com/resources/what-is-ci-cd/



After reading this article, you will be able to answer the following questions:

  • What is CI/CD? 
  • How can centralized logging help me better understand my CI/CD?
  • Which monitors and metrics can help me with logging implementation? 

Continuous Integration/Continuous Delivery

What Is CI/CD?

As an engineer at any organization, you will encounter CI/CD pipelines. These pipelines take the latest code changes and deploy them, manually or automatically, to all of your environments. The “CI” in CI/CD stands for Continuous Integration; put simply, this is the practice of automating the integration of code changes from multiple contributors into a single project (typically a shared GitHub repository). The “CD” component stands for Continuous Delivery, a high degree of automation that helps push code into production faster than ever. CI/CD is designed to require less human intervention, since there’s a chance of human error each time humans are involved in a process. By using CI/CD, you can ensure that you have more secure and more efficient pipelines that can be changed or updated frequently and on the fly. For this to run smoothly and not break everything in sight, testing must be part of your release cycle.
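To make this concrete, here is a minimal sketch of what an automated build-and-test stage might look like in a CircleCI-style version 2.1 configuration. The job name, Docker image, and commands are illustrative placeholders rather than a prescribed recipe; adapt them to your own project.

    version: 2.1

    jobs:
      build-and-test:            # illustrative job name
        docker:
          - image: cimg/node:lts # placeholder image; use one that matches your stack
        steps:
          - checkout
          - run: npm ci          # placeholder dependency install
          - run: npm test        # the automated test gate in the release cycle

    workflows:
      build-test-deploy:
        jobs:
          - build-and-test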

CI/CD Tooling

Even if you are not a “DevOps engineer,” you will still interact with your organization's CI/CD. Fortunately, there are many tools that can help you, and your organization is likely already running one of them (CircleCI, which we use later in this article, is one example).

CI/CD pipeline tools enable you to automate your software delivery process. Without them, you would spend most days running several commands in order to apply your code changes to the applications that your company manages.

When using your CI/CD tool of choice, it's important to remember that these systems are predominantly gated. This means that a feature or release candidate is first released into a lower environment and run through testing; a team then approves or denies its promotion to the next environment, and the process repeats until the change reaches production.
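In CircleCI, for example, this kind of gate can be modeled with an approval job in the workflow: the pipeline pauses until a human approves or denies the promotion. The workflow and job names below are illustrative, and deploy-to-production is assumed to be a deploy job defined elsewhere in the same config.

    workflows:
      build-test-deploy:
        jobs:
          - build-and-test                 # runs automatically on every commit
          - hold-for-approval:             # manual gate: a human approves or denies
              type: approval
              requires:
                - build-and-test
          - deploy-to-production:          # illustrative deploy job, defined elsewhere
              requires:
                - hold-for-approval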

Centralized Logging

Centralized Logging with Mezmo

Mezmo, formerly LogDNA, is different from other log management tools in that it aggregates all system and application logs into one centralized system while also using automatic parsing and intelligent filters to efficiently query the log lines you need.

How to Set up Centralized Logging with Mezmo

Mezmo is very simple and fast to set up. You’ll be able to gather, monitor, alert, parse, live tail, graph, and analyze your logs in a few simple steps:

  1. Set up an account on the Mezmo sign up page.
  2. Select your preferred log ingestion method.
  3. Search your logs.
  4. Create Views and attach Alerts.
  5. Create Boards and Graphs.

For a speedy setup, check out the Quick Start Guide. Below, we’ll dive into each of these steps in a bit more detail.

Set up an Account

You can set up an account simply by going to the Mezmo sign up page. You will automatically be enrolled in a 14-day free trial.  After you’ve set up your account, you’ll be asked for your organization's name. Then, you will have a few different methods for starting your log ingestion. 

Log Ingestion Methods

After you have set up your account, you’ll need to decide which log ingestion method will work best for you. You’ll be provided with an ingestion key as a way to connect, and you’ll have several options:

  • Use the provided ingestion key
  • Install the Agent
  • Install the integrations for your platform(s)
  • Use Syslog
  • Use code libraries



CI/CD

To show you how it all works, we will use CircleCI as our CI/CD tooling of choice. From the UI, we can see that the build has failed. We don’t quite know why it failed, so let's set up logging using Mezmo to dig deeper. 



Setting up Mezmo for CircleCI

You’ll need version 2.1 of the CircleCI configuration, so be sure to check your config file and update it if necessary. From there, you can search for the Mezmo Orb in the CircleCI Orb Registry. If you’d like more detail before we begin, you can review the orb documentation.

What Is an Orb?

An orb is a shareable package of CircleCI configuration that you can use for your builds. You can choose one from the public registry, or you can create a private orb that better suits your needs.
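In a version 2.1 config, pulling an orb into your build is a short declaration at the top of the file. The orb reference below is a placeholder; look up the actual namespace, name, and version of the Mezmo (LogDNA) orb in the CircleCI Orb Registry.

    version: 2.1

    orbs:
      # Placeholder reference: substitute the real namespace/name@version
      # for the Mezmo (LogDNA) orb from the CircleCI Orb Registry.
      logging: mezmo/notify@1.0.0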



Let's set this up so that we're notified of the status of build events from our CI/CD pipeline. We’ll need some information from Mezmo, such as the ingestion key, to do this.



You will need to update your configuration file with snippets along the lines of the sketch below.
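As a rough sketch, and building on the orb declaration above, the workflow below adds a notification job after the build. The logging/notify job name and its behavior are hypothetical stand-ins for whatever the orb actually provides, so follow the orb's own documentation for the real job name and parameters. The key points are to run the notification after your build job and to supply your Mezmo ingestion key through a CircleCI environment variable or context rather than hard-coding it in the config.

    workflows:
      build-test-deploy:
        jobs:
          - build-and-test
          # Hypothetical orb job that reports the build result to Mezmo; the
          # job name and parameters come from the orb you installed, so check
          # its documentation. Keep the ingestion key in a CircleCI project
          # environment variable or context, never in this file.
          - logging/notify:
              requires:
                - build-and-test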


Search Your Logs

After you have successfully set up Mezmo for CircleCI, you'll be able to view and search your logs with ease, such as pulling up the output of that failing build.


Create Views for Your Error Logs

Setting up views with your error logs is super easy! Under the “Levels” tab, you can sort by the log level. If you only select the “Error” logs, you’ll create a view that will only display your error logs.



After selecting the Error log level, save the result as a new view so that you only see the error logs.


Using Logging for Your CI/CD

Managing and Measuring Your CI/CD 

Now that you’ve (probably) interacted with a CI/CD tool, let’s see how we can make it better. More specifically, let’s see how we can better measure what success looks like from the CI/CD pipeline.

One of the most important things that centralized logging can do for your pipeline is give you the ability to watch changes being applied as data flows through each environment, from development, to non-production, to production.


If you don’t have logging set up for your CI/CD pipeline, you won’t be able to see how changes affect each environment, and it will be extremely difficult to know how to move forward on the path to a successful production release. Without logs, you can’t understand the effects of your code as it moves through the pipeline, which prevents your team from confirming that your changes are ready to move to the next environment in your stack.

Conclusion

You probably don’t have time to watch a deployment move through your CI/CD pipeline. A logging system will allow you to see what happened at a specific time that triggered a specific alert on lower environments (“lower" meaning earlier in the pipeline). This level of insight can be a tremendous help when it comes to understanding your workflow and finding the bottlenecks in your pipeline process.

