Deploy a Production-Ready MariaDB Cluster on Kubernetes with Bitnami and Helm

MariaDB is a popular, open source relational database management system that is widely used for enterprise applications and mission-critical use cases. One of its key features is data replication, which allows data to be mirrored across multiple nodes. This feature increases application resilience, ensuring that applications can easily recover from system failures and avoid critical data loss.

To make it easy to deploy MariaDB in production environments, Bitnami now offers a MariaDB Helm chart. The Bitnami MariaDB chart sets up a primary-replica MariaDB replication cluster that is easily configurable, fault-tolerant and stores all your data reliably using persistent volumes. This chart also follows current best practices for security and scalability, thereby ensuring that your MariaDB cluster is ready for immediate production use.

[Image: MariaDB cluster]

Understanding Deployment Options

The Bitnami MariaDB Helm chart can be deployed on any Kubernetes cluster. With the chart, Bitnami provides two configuration files: values.yaml, which initializes the deployment using a set of default values and is intended for development or test environments, and values-production.yaml, which is intended for production environments.

This blog post will use Azure Kubernetes Service (AKS) and the values-production.yaml configuration file, but it's equally easy to deploy the Bitnami MariaDB chart on Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) or even minikube for quick testing.

Deploying the Cluster

To deploy the Bitnami MariaDB chart on AKS, provision a new Kubernetes cluster on Microsoft Azure and then install and configure kubectl with the necessary credentials. You will find a detailed walkthrough of these steps in our AKS guide. Once done, download the values-production.yaml file and deploy the MariaDB chart using the command below, replacing the ROOT_PASSWORD and REPLICATION_PASSWORD placeholders with secure passwords:

$ helm install --name my-release stable/mariadb -f values-production.yaml --set rootUser.password=ROOT_PASSWORD --set replication.password=REPLICATION_PASSWORD
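If you still need to obtain the values-production.yaml file referenced in the command, it can be downloaded directly from the chart's source repository. The URL below is an assumption based on the stable Helm charts repository layout on GitHub; adjust it if the chart has moved:

```shell
# Fetch the production configuration file for the stable MariaDB chart.
# NOTE: this URL assumes the helm/charts stable repository layout.
curl -LO https://raw.githubusercontent.com/helm/charts/master/stable/mariadb/values-production.yaml
```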

This command creates a deployment with the name my-release. You can use a different release name if you wish - just remember to update it in the previous and following commands. Monitor the pods until the deployment is complete:

$ kubectl get pods -w

Here is a sample of the command output showing the running pods:

[Image: Running MariaDB cluster]

To check that everything is working correctly, connect to the primary node using a MariaDB client pod, enter the root password when prompted and then run the queries shown below:

$ kubectl run my-release-mariadb-client --rm --tty -i --image docker.io/bitnami/mariadb:10.2.14 --namespace default --command -- bash
$ mysql -h my-release-mariadb.default.svc.cluster.local -uroot -p
Enter password:
MariaDB [(none)]> SHOW SLAVE HOSTS\G
MariaDB [(none)]> SHOW PROCESSLIST\G

These queries will provide information on connected replicas and the status of the replication process. If you see output similar to the image below, your cluster is good to go!

[Image: MariaDB replicas]

Understanding the Default Network Topology and Security

By default, the Bitnami MariaDB deployment is configured with three nodes (one primary and two replicas). However, you can scale the cluster up or down by adding or removing nodes even after the initial deployment. The cluster operates on the standard port 3306. Remote connections are enabled for this port by default.
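You can confirm this topology by inspecting the resources the chart creates. A quick check, assuming the my-release release name used earlier (label names may vary between chart versions):

```shell
# List the services created by the release; both the primary and the
# replica service should expose the standard MariaDB port 3306.
kubectl get svc -l release=my-release

# List the StatefulSets to confirm the primary and replica node counts.
kubectl get sts -l release=my-release
```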

MariaDB does not have strict minimum hardware requirements, so the default virtual machine type provisioned by AKS will work without errors. However, depending on the likely workload of your database cluster, you may want to use machine types optimized for relational database servers, such as "memory-optimized" virtual machines.

Understanding Data Replication and Persistence

A key feature of Bitnami's MariaDB Helm chart is that it comes pre-configured to provide a horizontally scalable and fault-tolerant deployment. Data automatically replicates from the primary node to all replica nodes using the binary log. The primary node receives all write operations, while the replica nodes repeat the operations performed by the primary node on their own copies of the data set and are used for read operations. This model improves the overall performance of the solution. It also simplifies disaster recovery, because a copy of the data is maintained on each node in the cluster.

To see replication in action, add some data to the primary node and then check that the same data exists on a replica node:

$ mysql -h my-release-mariadb.default.svc.cluster.local -uroot -p my_database
Enter password:
MariaDB [my_database]> CREATE TABLE test (id int not null, val varchar(255) not null);
MariaDB [my_database]> INSERT INTO test VALUES (1, 'foo'), (2, 'bar');
MariaDB [my_database]> exit

$ mysql -h my-release-mariadb-slave.default.svc.cluster.local -uroot -p my_database
Enter password:
MariaDB [my_database]> SHOW TABLES;
MariaDB [my_database]> SELECT * FROM test;

Data persistence is enabled by default in the chart configuration. A separate persistent volume is used to store the data on the primary node and on each replica node. If a replica fails, a new one will be scheduled automatically. If the primary fails, applications connected to it may experience some downtime until a new primary is started. However, there would not be any data loss, as the data is stored in a separate persistent volume.
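You can verify that each node received its own persistent volume by listing the persistent volume claims for the release. A quick check, assuming the my-release name from earlier (label names may vary between chart versions):

```shell
# Each MariaDB pod (primary and replicas) should have its own claim,
# each bound to a separate persistent volume.
kubectl get pvc -l release=my-release
```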

By default, the values-production.yaml configuration file initializes an 8 GB persistent volume. However, it's possible to modify the storage type or the size of the disk by setting different values at deployment time, as in the example below, which configures a 16 GB persistent volume instead:

$ helm install --name my-release stable/mariadb -f values-production.yaml --set rootUser.password=ROOT_PASSWORD --set replication.password=REPLICATION_PASSWORD --set master.persistence.size=16Gi

Scaling the Deployment

You can easily scale the cluster up or down by adding or removing nodes. For example, to scale the number of replicas up to 5, get the name of the replica StatefulSet and then use kubectl, as shown below:

$ kubectl get sts -l "release=my-release,component=slave"
$ kubectl scale sts/my-release-mariadb-slave --replicas=5
[Image: MariaDB scale-up]

Wait for the new nodes to become active and then check that the 5 replica nodes have been correctly added to the replication cluster using the SHOW PROCESSLIST and SHOW SLAVE HOSTS commands shown earlier.

[Image: MariaDB scaled-up cluster]

If you already have Prometheus enabled in your Kubernetes cluster, the Bitnami MariaDB Helm chart comes preconfigured to work with Prometheus, making it easy to monitor the status of your MariaDB deployment. See the full list of Prometheus-related parameters in the chart template.
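If metrics are not already active in your deployment, they can typically be switched on at install time. A hedged example, assuming the chart exposes a metrics.enabled parameter for its Prometheus exporter sidecar (check the chart's documented parameters for your version):

```shell
# Deploy the chart with the Prometheus metrics exporter enabled.
helm install --name my-release stable/mariadb -f values-production.yaml \
  --set rootUser.password=ROOT_PASSWORD \
  --set replication.password=REPLICATION_PASSWORD \
  --set metrics.enabled=true
```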

Updating the Deployment

You can upgrade your deployment to the latest version of the chart with these commands:

$ helm repo update
$ helm upgrade my-release stable/mariadb -f values-production.yaml --set rootUser.password=ROOT_PASSWORD --set replication.password=REPLICATION_PASSWORD
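After the upgrade completes, you can confirm that the new revision was rolled out successfully:

```shell
# Show the release's revision history; the latest revision should be
# listed as deployed.
helm history my-release

# Watch the pods restart with the updated chart version.
kubectl get pods -w
```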

If this sounds interesting to you, why not try it now? Deploy the Bitnami MariaDB Helm chart on Azure Kubernetes Service (AKS) and then tweet @bitnami and tell us what you think!