Collect and Analyze Log Data for a Kubernetes Cluster with Bitnami's Elasticsearch, Fluentd and Kibana Charts
Introduction
Once you have multiple applications deployed on a Kubernetes cluster, monitoring them becomes an important, and ongoing, challenge. Typically, you will want an easy way to review and analyze application logs at a high level, but also the flexibility to dive deeper when needed for root cause analysis and troubleshooting.
When it comes to log analysis on Kubernetes, a popular combination today is that of Kibana, Elasticsearch and Fluentd. Bitnami provides up-to-date Helm charts for these three open source tools, making it easy to add them to your cluster and configure them to work together.
- Bitnami's Fluentd chart makes it fast and easy to configure Fluentd to collect logs from pods running in the cluster, convert them to a common format and deliver them to different storage engines.
- Bitnami's Elasticsearch chart provides an Elasticsearch deployment for data indexing and search.
- Bitnami's Kibana chart makes it easy to connect Kibana with Elasticsearch, generate queries and visualize results using an intuitive graphical interface.
This article walks you through that process: first capturing and storing logs from a Kubernetes cluster using Fluentd and Elasticsearch, and then using Kibana to query and analyze the log data from a Web application running on the cluster.
Assumptions and prerequisites
This guide makes the following assumptions:
- You have a Kubernetes cluster running with Helm (and Tiller if using Helm v2.x) installed.
- You have the kubectl command-line interface (kubectl CLI) installed (see the quick check below).
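For example, you can verify both prerequisites by confirming that the helm and kubectl binaries respond and that kubectl can reach your cluster:
helm version
kubectl version --client
kubectl cluster-info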
Step 1: Deploy Elasticsearch
The first step is to deploy Elasticsearch on your Kubernetes cluster using Bitnami's Helm chart. This Elasticsearch deployment will receive formatted log data from Fluentd, index it and store it.
First, execute the following command to add the Bitnami charts repository to Helm:
helm repo add bitnami https://charts.bitnami.com/bitnami
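If you had already added the Bitnami repository earlier, refresh the local chart index so that you install the latest chart versions:
helm repo update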
Next, execute the following command to deploy Elasticsearch:
helm install elasticsearch bitnami/elasticsearch
By default, the chart will create an Elasticsearch cluster with separate master and data pods, together with a coordinating pod. Wait for a few minutes until the chart is deployed and note the host name of the coordinating node displayed in the output, as you will need it in the next step.
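To follow the progress of the deployment and find that host name, you can list the pods and services created by the chart. The commands below assume the release name elasticsearch in the default namespace; the coordinating service is typically named along the lines of elasticsearch-coordinating-only, but the exact host name to use is printed in the chart's installation notes.
kubectl get pods | grep elasticsearch
kubectl get svc | grep elasticsearch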

Step 2: Deploy Kibana
The next step is to deploy Kibana, again with Bitnami's Helm chart. In this case, you will provide the host name of the Elasticsearch coordinating node as a parameter to the Helm chart, so that Kibana can read and query the Elasticsearch indices.
Execute the command below, replacing the ELASTICSEARCH-COORDINATING-NODE-NAME placeholder with the host name of the coordinating node obtained at the end of Step 1:
helm install kibana bitnami/kibana \
  --set elasticsearch.enabled=false \
  --set elasticsearch.external.hosts[0]=ELASTICSEARCH-COORDINATING-NODE-NAME \
  --set elasticsearch.external.port=9200 \
  --set service.type=LoadBalancer
In the previous command, the elasticsearch.enabled parameter disables usage of Kibana's bundled Elasticsearch in favour of the deployment created in Step 1, which is defined in the elasticsearch.external parameters. The service.type parameter ensures that the Kibana deployment is accessible via a load balancer.
After executing the previous command, wait a few minutes for the deployment to complete and for a public IP address to be assigned to the Kibana load balancer. To obtain the load balancer IP address, use the command below:
kubectl get svc kibana
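If you only need the address itself, a jsonpath query such as the one below should also work (some cloud providers expose a host name instead of an IP address, in which case use the hostname field instead of ip):
kubectl get svc kibana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'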
Using a load balancer service will typically assign a public IP address for the deployment. Depending on your cloud provider's policies, you may incur additional charges for this static IP address.
Confirm that you are able to connect to Kibana by browsing to the load balancer's public IP address. You should see the Kibana welcome page, as shown below:

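As a quick command-line check, you can also confirm that Kibana is responding before opening the browser. Replace the KIBANA-LOADBALANCER-IP placeholder with the address obtained above, and add the service port (for example :5601) if your deployment does not listen on port 80:
curl -sI http://KIBANA-LOADBALANCER-IP/ | head -n 1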
By default, Kibana is configured to be externally accessible without any authentication mechanisms. For production scenarios, follow the authentication guidelines in the Elasticsearch documentation.
Step 3: Configure and deploy Fluentd
The next step is to deploy Fluentd and configure it to relay logs from cluster applications to Elasticsearch. To do this, it is necessary to create two configuration maps, one instructing the forwarder how to parse log entries and the other instructing the aggregator how to send log data to Elasticsearch.
Follow the steps below:
Create a file with the following configuration map at /tmp/elasticsearch-output.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-output
data:
  fluentd.conf: |
    # Ignore fluentd own events
    <match fluent.**>
      @type null
    </match>

    # TCP input to receive logs from the forwarders
    <source>
      @type forward
      bind 0.0.0.0
      port 24224
    </source>

    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      bind 0.0.0.0
      port 9880
    </source>

    # Throw the healthcheck to the standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type stdout
    </match>

    # Send the logs to Elasticsearch
    <match **>
      @type elasticsearch
      include_tag_key true
      host "#{ENV['ELASTICSEARCH_HOST']}"
      port "#{ENV['ELASTICSEARCH_PORT']}"
      logstash_format true

      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
Create a file with the following configuration map at /tmp/apache-log-parser.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-log-parser
data:
  fluentd.conf: |
    # Ignore fluentd own events
    <match fluent.**>
      @type null
    </match>

    # HTTP input for the liveness and readiness probes
    <source>
      @type http
      port 9880
    </source>

    # Throw the healthcheck to the standard output instead of forwarding it
    <match fluentd.healthcheck>
      @type stdout
    </match>

    # Get the logs from the containers running in the cluster
    # This block parses logs using an expression valid for the Apache log format
    # Update this depending on your application log format
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
      tag www.log
      <parse>
        @type regexp
        expression /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] \\"(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?\\" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$/
        time_format %d/%b/%Y:%H:%M:%S %z
      </parse>
    </source>

    # Forward all logs to the aggregators
    <match **>
      @type forward
      <server>
        host fluentd-0.fluentd-headless.default.svc.cluster.local
        port 24224
      </server>

      <buffer>
        @type file
        path /opt/bitnami/fluentd/logs/buffers/logs.buffer
        flush_thread_count 2
        flush_interval 5s
      </buffer>
    </match>
Tip: The configuration shown above defines a regular expression that matches the standard Apache log format. You can add further parse expressions to this block to match logs in other formats (such as those from non-Apache applications running in the cluster). Similarly, depending on the deployment parameters used for the Fluentd chart (release name, namespace and cluster domain), you might need to update the aggregator host name shown in the <server> block of the configuration above.
Apply both the configuration maps:
kubectl apply -f /tmp/elasticsearch-output.yaml
kubectl apply -f /tmp/apache-log-parser.yaml
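You can confirm that both ConfigMaps were created before moving on:
kubectl get configmap elasticsearch-output apache-log-parser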
Deploy Fluentd by executing the command below:
helm install fluentd bitnami/fluentd \
  --set aggregator.configMap=elasticsearch-output \
  --set forwarder.configMap=apache-log-parser \
  --set aggregator.extraEnv[0].name=ELASTICSEARCH_HOST \
  --set aggregator.extraEnv[0].value=ELASTICSEARCH-COORDINATING-NODE-NAME \
  --set aggregator.extraEnv[1].name=ELASTICSEARCH_PORT \
  --set-string aggregator.extraEnv[1].value=9200 \
  --set forwarder.extraEnv[0].name=FLUENTD_DAEMON_USER \
  --set forwarder.extraEnv[0].value=root \
  --set forwarder.extraEnv[1].name=FLUENTD_DAEMON_GROUP \
  --set forwarder.extraEnv[1].value=root
Here is a quick explanation of what the previous command does and how to replace its placeholders:
- The aggregator.configMap and forwarder.configMap parameters define the aggregator and forwarder configuration maps that Fluentd will use to parse and deliver log data.
- The aggregator.extraEnv parameters configure the Elasticsearch host name and port for log delivery. Replace the ELASTICSEARCH-COORDINATING-NODE-NAME placeholder with the host name of the coordinating node obtained at the end of Step 1.
- The forwarder.extraEnv parameters ensure that Fluentd has the privileges necessary to read log data from cluster pods.
- The ELASTICSEARCH_HOST, ELASTICSEARCH_PORT, FLUENTD_DAEMON_USER and FLUENTD_DAEMON_GROUP values in the previous command are not placeholders and should not be replaced.
After executing the previous command, wait a few minutes for the deployment to complete. Confirm that the Fluentd pods are running with the command below:
kubectl get pods | grep fluentd
Here is what you should see:

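To confirm that the aggregator has loaded its configuration and can reach Elasticsearch, you can also inspect its logs. This assumes the aggregator pod is named fluentd-0, matching the host name used in the forwarder configuration above:
kubectl logs fluentd-0 --tail=20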
Step 4: Test the system
At this point, you have a working system, but (assuming you started with a fresh cluster) it doesn't yet have any applications whose logs it can capture or index. Therefore, to test the system, you will deploy WordPress on it and then use Kibana to view the WordPress access logs.
Begin by deploying WordPress using Bitnami's WordPress Helm chart:
helm install wordpress bitnami/wordpress
Warning: Using a load balancer service will typically assign a public IP address for the deployment. Depending on your cloud provider's policies, you may incur additional charges for this static IP address.
Wait a few minutes for the deployment to complete and then obtain the WordPress load balancer's public IP address with the command below:
kubectl get svc wordpress
Browse to the public IP address and confirm that you see the WordPress welcome page with a default blog post, as shown below:

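To make sure there is some access-log traffic for Fluentd to pick up, you can also request the home page a few times from the command line, replacing the WORDPRESS-SERVICE-IP placeholder with the load balancer address obtained above:
# send 20 requests to the WordPress home page
for i in $(seq 1 20); do curl -s -o /dev/null http://WORDPRESS-SERVICE-IP/; done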
Next, create an index pattern in Kibana so that it can display the indexed log data. Follow the steps below:
- Browse to the Kibana dashboard and click the "Management" icon in the left navigation bar.
- Select the "Index Patterns" menu item and click the "Create index pattern" button.

- Define a new index pattern named logstash-*. Confirm that there is at least one index matching this pattern and click the "Next step" button.

- Select @timestamp as the time filter field name. Click "Create index pattern".

- Click the "Discover" icon in the left navigation bar.
- Confirm that you see log data from the WordPress container, as shown below:

With this, your test is complete. You should now also be able to see logs in Kibana from other applications deployed on the cluster, so long as they use the Apache log format (or you adjust the Fluentd configuration shown in Step 3 to parse other formats).
Step 5: Query log data with KQL
You can use Kibana to query the log data streaming from the cluster, or create visualizations to answer specific questions. For example, a review of the log data in the Kibana dashboard will show that each log entry includes a code field containing the Apache web server response code: 200 for a successful response, 404 for a missing resource, 503 for a server error, and so on.
This means that you can use Kibana Query Language (KQL) to write a query that retrieves only the 404 responses sent by the WordPress Web server. This, in turn, lets you monitor the number of 404 responses both as an absolute figure and a percentage of total responses over time. You can also drill down into the query results to see the details of individual requests causing 404 responses, and use that to identify broken links in your WordPress site.
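As a complement to the Kibana interface, you can ask Elasticsearch the same question directly from inside the cluster. The sketch below assumes the logstash-* indices created by the Fluentd output in Step 3 and the code field produced by the Apache log parser; replace the ELASTICSEARCH-COORDINATING-NODE-NAME placeholder with the host name from Step 1:
# run a temporary pod with curl and count documents whose code field matches 404
kubectl run escurl -it --rm --image=curlimages/curl --restart=Never -- \
  curl -s "http://ELASTICSEARCH-COORDINATING-NODE-NAME:9200/logstash-*/_count" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match": {"code": "404"}}}'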
Follow the steps below:
- Navigate to the Kibana dashboard and click the "Discover" icon in the left navigation bar.
- Click the "Add filter" button and add a filter for the 404 response code as shown below:

- Test the filter by browsing to http://WORDPRESS-SERVICE-IP/no/such/page, where WORDPRESS-SERVICE-IP is the public IP address of the WordPress deployment's load balancer. This is a non-existent WordPress URL which will generate a 404 response from the Apache web server. Request this page multiple times.
- In the Kibana dashboard, click "Refresh" to update the filter results and visualization. You should see the number of 404 responses tick upwards in response to your requests, as shown below:

You can use the ApacheBench tool (ab) to send multiple concurrent requests and generate multiple 404s with a single command, such as ab -n 100 -c 5 http://WORDPRESS-SERVICE-IP/no/such/page.
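For example, to generate a mix of missing-page and successful requests, so that the 404 count can be compared against normal traffic in your visualization (WORDPRESS-SERVICE-IP is the same placeholder as above):
# 100 requests, 5 concurrent, against a non-existent page (404 responses)
ab -n 100 -c 5 http://WORDPRESS-SERVICE-IP/no/such/page
# 100 requests against the home page (200 responses), for comparison
ab -n 100 -c 5 http://WORDPRESS-SERVICE-IP/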
This is just a simple example of visualizing log data in Kibana using a single field. Depending on how granular your log data parsing and indexing is, you can create more complex filters and generate more sophisticated visualizations with Kibana.
Useful links
To learn more about the topics discussed in this guide, use the links below: