Deploy a Redis Sentinel Kubernetes cluster using Bitnami Helm charts

Introduction

Redis is an open source, in-memory data store that supports many different data structures: strings, hashes, lists, sets, sorted sets, and more. It has multiple uses: as a distributed cache, as a database, or as a message broker. The distributed cache is the most common use case for building robust, scalable, and highly available applications.

Bitnami provides Kubernetes users with the latest stable Redis Helm chart, which allows you to deploy Redis simply and reliably in a production environment. By default, the Bitnami Redis chart configures a cluster with four nodes, ensuring data persistence on all nodes through replication. The chart includes two values files:

  • values.yaml: Ideal for testing. This file defines a single-node cluster.
  • values-production.yaml: Specifies production parameters. This file defines a four-node cluster.

The values-production.yaml file includes many parameters that enable you to run a Redis cluster in any production Kubernetes cluster right away. By editing values-production.yaml, you can set the number of slave nodes at deployment time and also enable Sentinel to ensure high availability for your deployment. If the master node fails, Sentinel starts a failover process that promotes a slave to master and reconfigures the rest of the nodes to replicate from the new master.

In this guide, you will learn how to deploy Redis on a Kubernetes cluster with Sentinel enabled to monitor the nodes and ensure failover. Once the cluster is deployed, you will test data replication and check the cluster metrics by accessing the metrics pod. You will also test how Sentinel behaves when the master node fails.

Assumptions and prerequisites

This guide makes the following assumptions:

  • You have a Kubernetes cluster running and the kubectl command-line tool configured to connect to it.
  • You have Helm installed.

Step 1: Define configuration values for the Bitnami Redis Helm chart

To begin the process, you need to obtain the values-production.yaml file included in the Redis chart and edit it to enable Sentinel. Follow these instructions:

  • Download the values-production.yaml file by running the command below:
curl -Lo values-production.yaml https://raw.githubusercontent.com/helm/charts/master/stable/redis/values-production.yaml
  • Open the values-production.yaml file and edit the "sentinel" section as shown below:
## Use redis sentinel in the redis pod. This will disable the master and slave services and
## create one redis service with ports to the sentinel and the redis instances
sentinel:
  enabled: true
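
The same file controls the cluster size and Sentinel's failover behavior. The snippet below sketches the most relevant parameters, assuming the parameter names used by the stable/redis chart; check your copy of values-production.yaml for the exact keys and defaults:

## Number of slave nodes in the cluster
cluster:
  enabled: true
  slaveCount: 3

sentinel:
  enabled: true
  ## Name of the master set that Sentinel monitors
  masterSet: mymaster
  ## Number of Sentinels that must agree the master is down before starting a failover
  quorum: 2
  ## Time (in milliseconds) before an unresponsive master is considered down
  downAfterMilliseconds: 60000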

Step 2: Deploy the Bitnami Redis Helm chart

The next step is to deploy the latest version of the Bitnami Redis chart using the edited values-production.yaml file.

  • Make sure that you can connect to your Kubernetes cluster by running the following:
kubectl cluster-info
  • Install the latest version of the chart using the values-production.yaml file as shown below:
helm install stable/redis --values values-production.yaml
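
The command above lets Helm generate a random release name. Since the release name prefixes the services, StatefulSets, and secrets the chart creates, you may prefer a predictable one; my-release below is just an example name:

helm install --name my-release stable/redis --values values-production.yaml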

This deploys a cluster with four nodes (one master and three slaves) and a total of five pods: one per node, plus an additional pod for metrics. See the output displayed:

[Screenshot: deployment output]
  • Check the status of the pods by running the following:
kubectl get pods
[Screenshot: deployment pods]

Step 3: Test Redis cluster data replication

The Redis cluster is configured so that the master node handles both write and read operations and persists data using volumes. The rest of the nodes are configured as read-only. The data received by the master node is replicated to the slaves, so they always keep a copy of the master node's data.

To test how the replication works in the Redis cluster, follow the steps below:

  • To access the master node, it is necessary to run a Redis client in a separate pod. To do so, execute the command that was displayed in the NOTES section of the chart installation output. Replace the DEPLOYMENT-NAME placeholder with the name assigned to the deployment when installing the chart.
kubectl run --namespace default DEPLOYMENT-NAME-redis-client --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --labels="redis-client=true" \
  --image docker.io/bitnami/redis:5.0.5-debian-9-r141 -- bash
[Screenshot: get the deployment name]
[Screenshot: run a client pod]
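
The kubectl run command above assumes that the REDIS_PASSWORD environment variable is already set in your shell. The chart's installation notes show how to obtain it from the secret created at deployment time; the following is a sketch assuming the default secret name DEPLOYMENT-NAME-redis and the default namespace:

export REDIS_PASSWORD=$(kubectl get secret --namespace default DEPLOYMENT-NAME-redis \
  -o jsonpath="{.data.redis-password}" | base64 --decode)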
  • The chart creates a headless service for internal access. To access the master node, you need the name of this headless service, which provides stable DNS names for all the cluster pods. Run the following command:
kubectl get svc

You will see output similar to this:

[Screenshot: Redis services list]
Tip

To learn more about Headless Services in Kubernetes, visit the official Kubernetes documentation page.

  • Access the master node by running the redis-cli command as shown below. Remember to replace POD-NAME with the name of the pod you want to access and HEADLESS-SVC-NAME with the name of the headless service you obtained in the previous step:
redis-cli -h POD-NAME.HEADLESS-SVC-NAME -a $REDIS_PASSWORD
  • Now you can use the info command to see a complete report of the node status. To see specific information about data replication, use the info replication command as shown below:
[Screenshot: replication info]
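
On the master node, the output of info replication looks roughly like the abridged excerpt below; the IP addresses and offsets are illustrative values:

> info replication
# Replication
role:master
connected_slaves:3
slave0:ip=10.32.0.5,port=6379,state=online,offset=4096,lag=0
slave1:ip=10.32.0.6,port=6379,state=online,offset=4096,lag=0
slave2:ip=10.32.0.7,port=6379,state=online,offset=4096,lag=1
master_repl_offset:4096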
  • To test replication between the master and the slave nodes, set a new key and value using the set command. Note that the value must be quoted because it contains a space. Then, check that the value was stored correctly by executing the get command:
> set foo "hello world"
OK
> get foo
"hello world"
  • Disconnect from the master node and connect to one of the slave nodes. Execute the following command, replacing POD-NAME with the name of the slave pod you want to access and HEADLESS-SVC-NAME with the name of the headless service:
redis-cli -h POD-NAME.HEADLESS-SVC-NAME -a $REDIS_PASSWORD
  • To check that the value defined in the master node was replicated to the slaves, execute the get command:
> get foo
"hello world"

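Since slave nodes are read-only, you can also confirm a node's role by attempting a write on it. The attempt fails with an error similar to the one below (the exact wording varies between Redis versions):

> set foo "new value"
(error) READONLY You can't write against a read only replica.
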
Step 4: Check cluster metrics

First, to check the cluster metrics, you need to make them accessible from your local system. Execute the kubectl get svc command to get the name of the metrics service and its port:

[Screenshot: metrics service name and port]
  • Forward the metrics service port by running the command below. Replace SVC-NAME and PORT with the name of the service and the port you obtained in the step above:
kubectl port-forward svc/SVC-NAME PORT:PORT

The port is now forwarded to your localhost:

[Screenshot: port forwarding]
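
As an illustration, the Redis metrics exporter usually listens on port 9121 and the chart typically names the service after the release with a -redis-metrics suffix. Assuming a release named my-release, the command would be:

kubectl port-forward svc/my-release-redis-metrics 9121:9121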
  • To get the cluster metrics, open a new terminal and query the data collected by the metrics pod:
curl 127.0.0.1:PORT/metrics

You will see output similar to this. As shown, the metrics pod continuously collects a wide range of data from the cluster, including CPU and memory usage:

[Screenshot: cluster metrics]
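
Instead of reading through the full dump, you can filter for individual metrics. The metric names below are typical of the Redis exporter bundled with the chart; adjust them if your exporter version exposes different names:

curl -s 127.0.0.1:PORT/metrics | grep -E "redis_memory_used_bytes|redis_connected_clients"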

Step 5: Check Sentinel failover

In this step, you will test cluster failover with Sentinel by simulating an unexpected failure of the master node. Follow these instructions.

  • To access Sentinel, connect to the master node as explained in step 3 by executing the command below. Remember to replace the POD-NAME placeholder with the name of the pod you want to access and the HEADLESS-SVC-NAME placeholder with the name of the headless service associated with your pods.
redis-cli -h POD-NAME.HEADLESS-SVC-NAME -a $REDIS_PASSWORD
  • Once you are connected to the master node, access Sentinel using the redis-cli command shown at the bottom of the "Notes" section displayed when installing the chart. Replace the POD-NAME placeholder with the master pod name and HEADLESS-SVC-NAME with the headless service name you used to connect to Redis in the step above:
redis-cli -h POD-NAME.HEADLESS-SVC-NAME -p 26379 -a $REDIS_PASSWORD
  • Execute the sentinel masters command to display all the information related to the current master. Note the IP address of the master; you will check later how it changes when Sentinel promotes a slave pod to master:
[Screenshot: current master info]
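
If you only need the current master's address, Sentinel also answers a more targeted query. The master set name mymaster below is the chart's usual default (see the sentinel.masterSet parameter); the address returned here is illustrative:

> sentinel get-master-addr-by-name mymaster
1) "10.32.0.4"
2) "6379"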
  • Exit the Redis command line.
  • To force the master node to shut down, you will use the StatefulSet API object to scale down the cluster. Since the goal is to terminate the master pod, run the kubectl get sts command to get the name of the master StatefulSet (STS-NAME in the command below) and then scale it down to 0 replicas as follows:
kubectl scale sts STS-NAME --replicas=0
[Screenshot: scale down the master node]
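
As an illustration, if the chart was deployed with the release name my-release, the master StatefulSet is typically named my-release-redis-master; confirm the exact name with kubectl get sts before scaling:

kubectl scale sts my-release-redis-master --replicas=0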
Tip

Learn more about how to use StatefulSet to manage Kubernetes deployments and scale a set of pods by checking the Kubernetes official documentation.

This process can take a few minutes to complete. You can check the status of each pod by running the kubectl get pods command:

[Screenshot: pod status during scale-down]

After a few minutes, execute the kubectl get pods command again. You will see that the master pod has disappeared from the list of running pods. You should now see four pods: three slaves and one for metrics:

[Screenshot: pods after the master terminates]
  • To monitor Sentinel promoting a slave node to the new master, pick one of the pods and check the Sentinel container activity within that pod. Run the kubectl logs command, replacing POD-NAME with the name of the pod you want to check:
kubectl logs POD-NAME --container=sentinel -f

As you can see in the output below, Sentinel first checked if the original master node was available. Since that node failed, it switched the master role to one of the slaves:

[Screenshot: check the master node]
[Screenshot: switch the master node]
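
The excerpt below is an abridged illustration of the Sentinel events to look for; +sdown, +odown, and +switch-master are standard Sentinel event names, while the addresses are placeholders:

+sdown master mymaster 10.32.0.4 6379
+odown master mymaster 10.32.0.4 6379 #quorum 2/2
+try-failover master mymaster 10.32.0.4 6379
+switch-master mymaster 10.32.0.4 6379 10.32.0.5 6379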

Now you can check whether the pod selected by Sentinel is acting as the master node. To do so, connect to it and check its new role by running the info replication command. Follow these instructions:

  • Get the name of the pod running at the new master's IP address:
kubectl get pods -o wide

You will see output similar to this. Note down the POD-NAME of the new master node:

[Screenshot: new master's pod name]
  • To access the new master node, run a Redis client pod and then connect to the new master with the redis-cli command, as described in step 3 for testing cluster data replication.
  • Execute the info replication command and check that in the "Replication" section, the role assigned to the current pod is master.
[Screenshot: master role of the new pod]
Tip

You can check that data replication is still working correctly by setting a new key on the new master node and checking that the value is replicated to the other slaves, as explained in step 3 for testing cluster data replication.

Congratulations, you have a fully functional Redis cluster running on Kubernetes with failover and data replication.

Useful links