A Blueprint for Running Stateful Services on Kubernetes
Managing stateful applications was challenging for engineering and operations teams long before the debut of Kubernetes. In this post, we’ll explore every aspect of deploying stateful applications on Kubernetes, from the underlying hardware to Pod update strategies, and provide insights into how Mezmo, formerly known as LogDNA, uses stateful Kubernetes workloads to build one of the world’s fastest log management platforms.
First, we’ll describe what “stateful” and “state” mean in a cloud-native context.
“State” refers to the condition that an application is in at a particular point in time. A stateful application changes its behavior based on previous transactions; in other words, it maintains a memory of the past. Examples of stateful applications include databases, caches, and content management systems (CMS) such as WordPress. A stateful application must have a location where it can store its state as data, and that data needs to be available to the application throughout its lifespan. In a basic single-server, single-instance application, this could be as simple as storing data directly on the host filesystem.
However, scaling an application to multiple instances and nodes raises several operational challenges. For example, each instance of the application must either maintain its own set of data independently of the others, or access the same data concurrently with other instances. In Kubernetes, this requirement runs counter to the idea of container ephemerality: containers are designed to be replaceable and reschedulable throughout a cluster, but when a container has data associated with it, that data must be attached to the container on deployment. Kubernetes supports this behavior, but it requires additional configuration steps.
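One of those additional steps is typically binding durable storage to the Pod through a PersistentVolumeClaim. A minimal sketch, with illustrative names (`app-data`, `/var/lib/app`) and a stand-in image:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25       # stand-in image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app   # state survives container restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data   # attaches the claim above to this Pod
```

If the Pod is rescheduled, the claim (and its data) is reattached, rather than being lost with the container’s ephemeral filesystem.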
How effectively you can run stateful services on Kubernetes begins with your infrastructure. Cluster design impacts everything from performance to reliability. Things to consider include whether your cluster is managed or unmanaged, the hardware that your nodes run on, the type of storage that you use for application state, and multicloud deployments and data residency requirements.
Once you’ve designed and optimized your infrastructure, the next step is to deploy your stateful services to Kubernetes. Recent versions of Kubernetes provide native API objects that make this process easier, specifically StatefulSets. StatefulSets manage the deployment of stateful Pods, much as Deployments manage stateless Pods. StatefulSets provide additional functionality to aid in managing stateful applications, including:
- Assigning a persistent, unique, and ordered identifier to each Pod.
- Enabling per-Pod internal DNS lookups via a headless Service.
- Deploying, updating, and terminating Pods in sequential order.
- Dynamically provisioning disks via volumeClaimTemplates.
Each Pod in a StatefulSet is assigned an ordinal index: a 0-indexed integer that acts both as a unique identifier and as the order in which the Pod is deployed, updated, or terminated. This index, along with the StatefulSet name, is also used to create a unique network identity. Mezmo uses this for both service discovery and for load balancing requests across multiple Pods.
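The pieces above come together in a single manifest. A minimal sketch, with illustrative names, image, and storage size; the headless Service (`clusterIP: None`) is what gives each Pod a stable DNS name of the form `web-0.web.<namespace>.svc.cluster.local`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None            # headless Service: per-Pod DNS records
  selector:
    app: web
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web           # ties Pods to the headless Service above
  replicas: 3                # creates web-0, web-1, web-2 in order
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # stand-in image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:      # one PersistentVolumeClaim provisioned per Pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Scaling the StatefulSet up or down adds or removes Pods one at a time, in ordinal order, and each Pod keeps its own claim across reschedules.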
Kubernetes might not have been originally designed for stateful applications, but recent versions provide a high degree of support. The release of StatefulSets made it much easier to deploy and scale stateful applications in a safer and more resilient way. Recent Kubernetes versions have added additional helpful features as well, including dynamic disk provisioning.
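Dynamic disk provisioning is driven by a StorageClass that the claims in a volumeClaimTemplate can reference. A minimal sketch; the provisioner shown here is the AWS EBS CSI driver, so substitute your platform’s driver and parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                 # illustrative name
provisioner: ebs.csi.aws.com     # platform-specific CSI driver
parameters:
  type: gp3                      # EBS volume type (provider-specific)
volumeBindingMode: WaitForFirstConsumer  # delay binding until a Pod is scheduled
allowVolumeExpansion: true       # permit growing volumes later
```

Setting `storageClassName: fast-ssd` in a claim then causes Kubernetes to create the backing disk on demand, rather than requiring an administrator to pre-provision PersistentVolumes.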
Despite the fundamental difference between stateless and stateful applications, Kubernetes still offers a high level of flexibility and automation, whether running on bare metal or on a managed platform. Kubernetes allows us to automate our stateful applications across multiple environments, while maintaining lightning-fast performance and a high degree of availability.