Rolling Updates for ConfigMap Objects
In this blog post, I will introduce you to hashed ConfigMaps and Secrets, a clever technique implemented and used in Bitnami's Kubernetes Production Runtime: a collection of ready-to-run, pre-integrated services that make it easy to run production workloads in Kubernetes.
Hashed ConfigMaps and Secrets in Bitnami's Kubernetes Production Runtime allow you to trigger a rolling update on Pods whenever an associated ConfigMap or Secret object is changed. Please note that ConfigMap objects are used throughout this article for simplicity, but all of the information also applies to Secrets.
The problem: ConfigMap object updates are a risk for your cluster
Kubernetes has native semantics for replacing a container image using a rolling update strategy. Configuration changes are different: Kubernetes offers no built-in way to roll out a ConfigMap change to containers gradually.
Applying configuration changes to all replicas at once poses a great danger to production environments. Configuration errors can lead to outages, cascading failures, or subtle misbehavior that is not easy to detect. A mechanism that deploys configuration changes via rolling updates allows errors to be detected early, so the update can be aborted and rolled back. Delivering configuration changes to containers through rolling updates is also a step forward in treating configuration and code in the same way.
Kubernetes treats binaries (containers) and configuration (ConfigMaps and Secrets) very differently. Containers are usually replicated and can be updated using rolling updates. This allows users and operators to swap the binary used by a container in a controlled way, at a certain rate, while monitoring progress and detecting problems (for example, via liveness and readiness probes). Configuration, in contrast, is typically delivered through ConfigMap objects, and these objects are not replicated.
When a ConfigMap is updated, the following things can happen:
Kubernetes-aware binaries that watch ConfigMap objects are all notified at once when the object is modified, and then reconfigure themselves immediately. Unless specific measures are implemented, this simultaneous reconfiguration can lead to outages, crashes, and other failures.
For binaries that are not aware of Kubernetes, the problem must be handled externally: for the configuration changes to take effect, something or someone must restart the binary.
Rolling updates, by contrast, respect container probes, grace periods, and so on. This allows the rate at which containers are restarted to be controlled, failures to be detected early, and the rolling update to be stopped or rolled back.
Bitnami's solution
Bitnami's Kubernetes Production Runtime uses a clever technique to trigger a rolling update on Pods whenever an associated ConfigMap or Secret object changes: the ConfigMap or Secret object's name is suffixed with a 7-character hash derived from the object's contents.
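To make the naming scheme concrete before diving into the implementation, here is a small shell sketch (not part of the Bitnami code). The ConfigMap name and data are hypothetical, and the data string mimics how Jsonnet's std.toString serializes an object (keys sorted):

```shell
# Serialized ConfigMap data (keys sorted, as Jsonnet's std.toString renders them)
data='{"bar": "123", "foo": "bar", "type": "coffee"}'
# The first 7 hex characters of the MD5 digest become the name suffix
suffix=$(printf '%s' "$data" | md5sum | cut -c1-7)
echo "coffee-$suffix"

# Any change in the data yields a different suffix, and hence a new object name
data2='{"bar": "123", "foo": "foo", "type": "coffee"}'
suffix2=$(printf '%s' "$data2" | md5sum | cut -c1-7)
echo "coffee-$suffix2"
```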
The actual implementation is handled by two helper Jsonnet functions defined in the manifests/lib/utils.libsonnet file:
// From manifests/lib/utils.libsonnet
...
local hashed = {
  local this = self,
  metadata+: {
    local hash = std.substr(std.md5(std.toString(this.data)), 0, 7),
    local orig_name = super.name,
    name: orig_name + "-" + hash,
    labels+: {name: orig_name},
  },
},
HashedConfigMap(name):: kube.ConfigMap(name) + hashed,
HashedSecret(name):: kube.Secret(name) + hashed,
...
Here we can see that the HashedConfigMap and HashedSecret helper Jsonnet functions are defined very similarly:
- They inherit from kube.ConfigMap or kube.Secret, respectively.
- They override the object's metadata to suffix the object's name with a 7-character string computed from the MD5 hash of the object's data. The original, unsuffixed name is stored in a metadata label called name.
Example:
// coffee.jsonnet
local utils = import "../lib/utils.libsonnet";
{
  coffee_config: utils.HashedConfigMap("coffee") {
    data+: {
      "type": "coffee",
      "foo": "bar",
      "bar": "123",
    },
  },
}
Applying this configuration with kubecfg update coffee.jsonnet
will create a ConfigMap object similar to the following:
$ kubectl get cm
NAME DATA AGE
coffee-a3f14c4 3 10s
Note how the desired name coffee has been suffixed with -a3f14c4. These are the first 7 characters of the MD5 hash computed over the string representation of the data:
$ echo -n '{"bar": "123", "foo": "bar", "type": "coffee"}' | md5sum | cut -c1-7
a3f14c4
The actual ConfigMap object looks like the following:
$ kubectl describe cm coffee-a3f14c4
Name: coffee-a3f14c4
Namespace: default
Labels: name=coffee
Annotations: <none>
Data
====
bar:
----
123
foo:
----
bar
type:
----
coffee
Events: <none>
A single-bit change in the data associated with this ConfigMap object causes the MD5 hash, and hence the name, to change. For instance, let's change the ConfigMap contents, replacing the value of the foo key, bar, with foo:
// coffee.jsonnet
local utils = import "../lib/utils.libsonnet";
{
  coffee_config: utils.HashedConfigMap("coffee") {
    data+: {
      "type": "coffee",
      "foo": "foo",
      "bar": "123",
    },
  },
}
Applying this new configuration with kubecfg update coffee.jsonnet
creates a new ConfigMap object:
$ kubectl get cm
NAME DATA AGE
coffee-9739f7c 3 3s
coffee-a3f14c4 3 31m
Pay attention to the values listed in the AGE column: coffee-9739f7c is the newly created ConfigMap with the updated contents, while coffee-a3f14c4 is the old one.
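As with the first ConfigMap, the new suffix can be reproduced by hand by hashing the updated data string (foo now maps to foo), which should yield 9739f7c, matching the new object's name:

```shell
# MD5 over the updated data, serialized the way Jsonnet's std.toString renders it
echo -n '{"bar": "123", "foo": "foo", "type": "coffee"}' | md5sum | cut -c1-7
```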
The following is a more complete example:
local kube = import "../lib/kube.libsonnet";
local utils = import "../lib/utils.libsonnet";
{
  coffee_config: utils.HashedConfigMap("coffee") {
    data+: { "type": "coffee", "foo": "foo", "bar": "123" },
  },
  coffee: kube.Deployment("coffee") {
    spec+: {
      template+: {
        spec+: {
          volumes_+: {
            config: kube.ConfigMapVolume($.coffee_config),
          },
          containers_+: {
            default+: kube.Container("coffee") {
              image: "nginxdemos/hello:plain-text",
              ports_: { coffee: { protocol: "TCP", containerPort: 80 } },
              volumeMounts_+: { config: { mountPath: "/tmp", readOnly: true } },
            },
          },
        },
      },
    },
  },
}
This example declares a deployment with a single container (and a single replica) which uses the ConfigMap from the previous example by mounting it under /tmp.
Applying this new manifest with kubecfg update coffee.jsonnet
creates several new objects:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
coffee 1 1 1 1 3m17s
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
coffee-699d66474d 1 1 1 4m10s
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
coffee-699d66474d-6jqln 1/1 Running 0 3m57s
The pod references the ConfigMap by name:
$ kubectl describe pod coffee-699d66474d-6jqln
Name: coffee-699d66474d-6jqln
...
Containers:
coffee:
Image: nginxdemos/hello:plain-text
...
Mounts:
/tmp from config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-m6mls (ro)
...
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coffee-9739f7c
Optional: false
...
If we change the ConfigMap contents again in the manifest file, replacing the value of the bar attribute from 123 to 321, and push the changes, we observe the following:
$ kubecfg update coffee.jsonnet
INFO Validating deployments coffee
INFO validate object "apps/v1beta1, Kind=Deployment"
INFO Validating configmaps coffee-e07c8b8
INFO validate object "/v1, Kind=ConfigMap"
INFO Fetching schemas for 2 resources
INFO Updating configmaps coffee-e07c8b8
INFO Creating non-existent configmaps coffee-e07c8b8
INFO Updating deployments coffee
Another ConfigMap object is created:
$ kubectl get cm
NAME DATA AGE
coffee-9739f7c 3 32m
coffee-a3f14c4 3 71m
coffee-e07c8b8 3 12s
But more importantly, Kubernetes triggers an update on the Deployment as well as the underlying ReplicaSet and Pod objects:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
coffee 1 2 1 1 32m
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
coffee-5978d64b6c 1 1 1 7s
coffee-699d66474d 1 1 1 32m
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
coffee-5978d64b6c-p26tx 1/1 Running 0 9s
coffee-699d66474d-6jqln 1/1 Running 0 32m
For a brief period of time, there are two ReplicaSets associated with the Deployment: the older coffee-699d66474d corresponds to the Pod using the ConfigMap named coffee-9739f7c, while the newer coffee-5978d64b6c corresponds to the Pod using the newly created ConfigMap named coffee-e07c8b8:
$ kubectl describe rs coffee-5978d64b6c
Name: coffee-5978d64b6c
...
Controlled By: Deployment/coffee
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
...
Containers:
coffee:
Image: nginxdemos/hello:plain-text
...
Mounts:
/tmp from config (ro)
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coffee-e07c8b8
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 6m22s replicaset-controller Created pod: coffee-5978d64b6c-p26tx
After some time, the update is finished and the final state is as follows:
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
coffee-5978d64b6c 1 1 1 4m55s
coffee-699d66474d 0 0 0 37m
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
coffee-5978d64b6c-p26tx 1/1 Running 0 4m58s
Conclusions
This example shows how Bitnami's HashedConfigMap triggers updates on a Deployment when the contents of a ConfigMap change. The HashedSecret abstraction works the same way, but with an underlying Kubernetes Secret. The same update mechanism applies to StatefulSets and DaemonSets.