Create a Pub-Sub Messaging Cluster with Bitnami's Kafka and Zookeeper Containers

Apache Kafka is a well-known open source tool for real-time message streaming, used in combination with Apache Zookeeper to create scalable, fault-tolerant clusters for application messaging. At Bitnami, we've recently updated our Apache Kafka and Apache Zookeeper container images to make it quick and easy to create a scalable publish-subscribe messaging cluster for your applications.

This blog post walks you through a simple example of creating a Kafka messaging cluster. It assumes that you already have a Docker environment with Git and Docker Compose installed.

Starting the Cluster

Begin by cloning the Bitnami Docker Kafka repository:

$ git clone https://github.com/bitnami/containers.git

This repository already includes a Docker Compose file for running a cluster. To start it, change to the Kafka directory within the repository and run the commands below:

$ cd containers/bitnami/kafka
$ docker-compose -f docker-compose-cluster.yml up -d

This should start a three-node Kafka cluster with an additional Zookeeper management node. Verify that the cluster is running with a quick docker ps:

Running Kafka/Zookeeper cluster
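You can also scope the check to just this deployment's services with Docker Compose; the command below assumes you are still in the Kafka directory containing the Compose file:

$ docker-compose -f docker-compose-cluster.yml ps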

Testing the Cluster

Now for the fun part. Create a new topic named mytopic for messages (update the container name as needed for your environment):

$ docker exec -it bitnamidockerkafka_kafka1_1 kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 3 --partitions 3 --topic mytopic

Once the topic has been created, start the Kafka message consumer on one of the nodes. This consumer connects to the cluster and displays messages as they are published to the mytopic topic.

$ docker exec -it bitnamidockerkafka_kafka3_1 kafka-console-consumer.sh --zookeeper zookeeper:2181 --topic mytopic --from-beginning

Open a second console and produce some messages from a different node: run the command below, then type some messages, each on its own line.

$ docker exec -it bitnamidockerkafka_kafka1_1 kafka-console-producer.sh --broker-list kafka1:9092 --topic mytopic
test message 1
test message 2
...

The same messages should appear in the Kafka message consumer, as shown below:

Cluster messaging with Kafka/Zookeeper
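If you'd like to confirm that the topic really is replicated across all three brokers, you can describe it from any node. The command below is a quick check that assumes the same container name used earlier:

$ docker exec -it bitnamidockerkafka_kafka1_1 kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic mytopic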

Your cluster is now operational, and your applications can connect to it to publish or consume messages as needed.

If things don't work out as shown above, you can use docker-compose logs to obtain debugging output from the Kafka containers. Also, remember that running a three-node Kafka cluster on a single host is not for the faint of heart, so ensure that your host has sufficient memory and CPU resources to handle the load.
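For example, to follow the logs of a single broker (assuming the Compose file names the service kafka1, in line with the container names above), you could run:

$ docker-compose -f docker-compose-cluster.yml logs -f kafka1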

Try it out for yourself by getting our Apache Kafka container image, or check out our other containers!