The Impact Of Kubernetes On The CI/CD Ecosystem

4 MIN READ

Continuous integration (CI) and continuous delivery (CD) are two sides of the same coin we call DevOps. Continuous deployment is the last step of the CD phase, where the application is deployed into production. Emerging in the early part of this decade, CI and CD are not new terms to anyone who’s been managing application delivery for a while. Tools like Jenkins have done much to define what a CI/CD pipeline should look like. The key word when talking about CI/CD is ‘automation’. By automating various steps in the development cycle, we start to see improvements in speed, precision, repeatability, and quality. This is achieved through build automation, test automation, and release automation. Much can be said about each of these phases, as they involve new approaches that run contrary to the old waterfall model of software delivery. Making this automation possible takes multiple tools working together in a deeply integrated manner.

[Image: Container ship. Source: Wikimedia.org]

How Kubernetes Affects CI/CD Pipelines

While CI/CD is not new, the advent of Docker containers has left no stone unturned in the world of software. More recently, the rise of Kubernetes within the container ecosystem has reshaped the CI/CD process. DevOps requires a shift from the traditional waterfall model of development to a more modern, agile methodology. Rather than moving code between VMs in different environments, the same code is now moved across containers, or container clusters in the case of Kubernetes. Unlike static VMs, which are suited to a more monolithic style of application architecture, containers favor a distributed microservices model. This brings new opportunities in terms of elasticity, high availability, and resource utilization. Realizing these advantages, however, calls for a change in approach and tooling rather than a reliance on the old ones.

Jenkins - Build & Test Automation

Mention continuous integration, and Jenkins is the first tool that comes to mind. In recent years, Jenkins has focused on going beyond CI to handle the end-to-end delivery pipeline, including the CD phases. Kubernetes has been central to this effort. With its mature handling of resources in production, Kubernetes is just the partner Jenkins needs to extend its reach beyond CI.

Running Jenkins on Kubernetes brings many benefits. To start, Jenkins can take advantage of the scalability and high availability of Kubernetes. With numerous worker nodes to provision, managing the infrastructure that runs Jenkins can become a nightmare; Kubernetes makes this easier with its automatic pod management features. Further, Kubernetes enables zero-downtime updates for applications deployed through Jenkins. This is made possible by Kubernetes’ rolling updates feature, which gradually phases out pods running the older version of the application and replaces them with new ones. Throughout the update, it respects the ‘maxUnavailable’ setting, ensuring enough pods remain available to serve the application at all times. In this way, Kubernetes brings the ability to do canary releases and blue-green deployments to Jenkins.

Apart from Jenkins, there are also many newer CI tools built from a container-first standpoint, including CircleCI, Travis, CodeFresh, Drone, and Wercker. Many of these tools provide a simpler user experience than Jenkins and are fully managed SaaS solutions. Almost all of them encourage a ‘pipeline’ model of deploying software, and in doing so bring greater control and flexibility to how you manage the CI process. They also feature integrations with all major cloud providers, making them a strong alternative to the industry-leading Jenkins.
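As a minimal sketch of the rolling update strategy described above (the application name, image, and replica count here are hypothetical placeholders):

```yaml
# Illustrative Deployment using a rolling update; names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod may be unavailable during the rollout
      maxSurge: 1         # at most one extra pod may be created above the desired count
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.1.0
```

Updating the image tag and reapplying the manifest triggers the gradual replacement: old pods are scaled down only as new ones become ready, which is what keeps the application serving traffic throughout.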

Spinnaker - Multi-Cloud Deployment

While Jenkins is perfect for the build stages of the pipeline, perhaps an even more complex problem to solve is deployment, especially when it involves multiple cloud platforms and mature deployment practices. Kubernetes has a Deployment API that supports rollout, rollback, and other core deployment functionality. However, another open source tool, Spinnaker, created by Netflix, has been in the spotlight for its advanced deployment controls.

Spinnaker focuses on the last mile of the delivery pipeline: deployment in the cloud. It automates deployment processes and cloud resources, acting as a bridge between the source code on, say, GitHub, and the deployment target, such as a cloud platform. The best part is that Spinnaker supports multiple cloud platforms and enables a multi-cloud approach to infrastructure. This is one of the original promises of Kubernetes, and Spinnaker is making it accessible to all. Spinnaker uses pipelines to let users control a deployment, and these pipelines are deeply customizable. It automatically replaces unhealthy VMs in a cloud platform, so you can focus more on defining the resources your applications require than on maintaining those resources.

Spinnaker isn’t a one-stop solution. In fact, it leverages Jenkins behind the scenes to manage builds, and it builds on top of the Kubernetes Deployment API, adding advanced functionality of its own at every stage. For example, while rollbacks are possible with the Kubernetes API, they’re much faster and easier to execute with Spinnaker. Considering its focus is deployment, it’s no surprise that Spinnaker has first-class support for operations like canary releases and blue-green deployments (a generic sketch of the blue-green idea appears below).

While Jenkins excels at build automation and initiating automated tests, Spinnaker complements it well by enabling complex deployment strategies. Which tool you choose will depend on your circumstances. Teams that are deeply invested in Jenkins may find it easier to simply manage Jenkins better using Kubernetes. Teams that are looking for an easier way to handle deployments than Jenkins would want to give Spinnaker a spin. Either way, Kubernetes will play a role in ensuring that CI/CD pipelines function seamlessly. A CI/CD pipeline management tool is essential, as it acts as the control plane for your operations. However, it’s not the only tool you’ll use.
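As a generic illustration of the blue-green idea on Kubernetes (this is not Spinnaker’s pipeline syntax, and the names are hypothetical), two Deployments labeled ‘blue’ and ‘green’ can run side by side while a Service selector decides which one receives traffic:

```yaml
# Generic blue-green sketch: traffic follows the Service selector.
# Switching 'track' from blue to green cuts traffic over to the new version.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    track: blue   # change to "green" once the new Deployment is verified healthy
  ports:
    - port: 80
      targetPort: 8080
```

Rolling back is then just a matter of pointing the selector back at the previous track; tools like Spinnaker automate this kind of switch and surround it with health checks.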

Helm - Package Management

Helm is a package manager for Kubernetes that makes it easy to install applications on a cluster. With automation being key to successful CI/CD pipelines, it’s essential to be able to quickly package, share, and install application code and its dependencies. Helm organizes software into ‘charts’, with each chart being a package that you can install in Kubernetes. Helm places an agent called Tiller within the Kubernetes cluster; Tiller interacts with the Kubernetes API and handles installing and managing packages. The biggest advantage of Helm is that it brings predictability and repeatability to the CI/CD pipeline. It lets you define and add extensive configuration and metadata for every deployment. Further, it gives you complete control over rollbacks and brings deep visibility into every stage of a deployment.
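As a rough illustration of what a chart contains (the chart name and values below are hypothetical), a chart bundles metadata, default configuration, and templated Kubernetes manifests:

```yaml
# Chart.yaml -- chart metadata (hypothetical chart)
apiVersion: v1
name: my-app
version: 0.1.0
description: Packages my-app and its Kubernetes manifests

# values.yaml -- default configuration, overridable at install time
replicaCount: 3
image:
  repository: registry.example.com/my-app
  tag: "1.1.0"

# templates/deployment.yaml (excerpt) -- values are injected into manifests
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Because the same chart can be installed with different value overrides, a pipeline can deploy an identical package to staging and production while varying only the configuration.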

Trends in CI/CD

Kubernetes is changing CI/CD for the better. By enabling and transforming tools like Jenkins, Spinnaker, and Helm, Kubernetes is ushering in a new way to deploy applications. While the ideas for doing canary releases and blue-green deployments have been around for over a decade, they’re made truly possible with the advances that Kubernetes brings. Here are some of the trends that are emerging because of the influence of Kubernetes.

Pipelines

All CI/CD tools today look at the software delivery cycle as a pipeline that runs through a series of steps from start to end. These pipelines aren’t strictly linear, however; they allow for complex changes at every step. The biggest benefit is the ability to abstract the entire process and make it easier to manage. The pipeline model lets you see how each component depends on the others and view every step in the context of the rest. Previously, each step was disconnected from the others, and silos were the norm. Today, with CI/CD tools and Kubernetes, pipelines aren’t just on paper; they are how software delivery happens in practice.
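To make the idea concrete, here is a purely illustrative, tool-agnostic pipeline definition; this is a hypothetical schema, not the exact syntax of Jenkins, CircleCI, or any other specific product:

```yaml
# Hypothetical pipeline-as-code sketch; stage and step names are illustrative only.
stages:
  - name: build
    steps:
      - run: docker build -t registry.example.com/my-app:${GIT_COMMIT} .
  - name: test
    steps:
      - run: docker run --rm registry.example.com/my-app:${GIT_COMMIT} npm test
  - name: deploy
    steps:
      - run: helm upgrade --install my-app ./chart --set image.tag=${GIT_COMMIT}
```

Because the definition lives in version control next to the application code, every change to the pipeline itself is reviewed, versioned, and repeatable.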

Configuration As Code

Infrastructure was previously controlled by its parent platform: VMware dictated how you interacted with VMs, and every change had to be made manually, separate from other changes. Today, with tools like Spinnaker and Helm, infrastructure is configured and managed via YAML files that can be versioned alongside application code. This doesn’t just ease the creation of resources; it also enables better troubleshooting.

Visibility & Control

Previously, version control was restricted to certain parts of the pipeline. With Kubernetes, version control is built in: every change is recorded and versioned, and can be retrieved or rolled back to if needed. Speaking of visibility, monitoring becomes more comprehensive with capable tools that are built to handle the scale and nuances of a Kubernetes-driven process. Tools like Prometheus and Heapster are great at delivering a stream of real-time metrics. Additionally, logging tools like LogDNA help capture the minute details of every deployment, including exceptions, errors, states, and events.
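As a minimal sketch of how such monitoring hooks into a cluster, Prometheus can use Kubernetes service discovery to find pods to scrape automatically; the job name below is arbitrary, and the annotation-based filter is just one common convention:

```yaml
# Minimal Prometheus scrape configuration using Kubernetes service discovery.
# Only pods annotated with prometheus.io/scrape: "true" are kept.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```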

Multi-Cloud Support

CI/CD tools today need to support multiple cloud platforms. It’s not that the same app would run on multiple cloud platforms; rather, in a large organization, different teams and different apps use various platforms to meet specific needs. A modern CI/CD tool needs to cater to these diverse teams and applications, and that means supporting all major cloud platforms as well as private data centers.

Conclusion

Kubernetes has changed how software is built and shipped. What began with the cloud computing movement and CI tools like Jenkins about a decade ago is now coming of age with Kubernetes. What’s remarkable is that these technologies are not being adopted merely by startups or fringe organizations, but by large mainstream enterprises looking for a way to modernize their application and infrastructure stacks. They’re looking for solutions to the real-world problems they face. If these CI/CD solutions tell us anything, it’s that Kubernetes is delivering where it really matters, and that’s making CI/CD a reality in many organizations. It’s about time, after all. Learn more about CI/CD with LogDNA!
