Bitnami ELK Installer

NOTE: Before running the commands shown on this page, you should load the Bitnami stack environment by executing the installdir/use_APPNAME script (Linux and Mac OS X) or by clicking the shortcut in the Start Menu under "Start -> Bitnami APPNAME Stack -> Application console" (Windows).
NOTE: When running the commands shown on this page, replace the installdir placeholder with the full installation directory for your Bitnami stack.

Description

The ELK stack is a log management platform consisting of Elasticsearch (deep search and data analytics), Logstash (centralized logging and parsing) and Kibana (powerful data visualizations).

First steps with the Bitnami ELK Stack

Welcome to your new Bitnami application! Here are a few questions (and answers!) you might need when first starting with your application.

What are the system requirements?

Before you download and install your application, check that your system meets these requirements.

How do I install the Bitnami ELK Stack?

Windows, OS X and Linux installer
  • Download the executable file for the Bitnami ELK Stack from the Bitnami website.

  • Run the downloaded file:

    • On Linux, give the installer executable permissions and run the installation file in the console.
    • On other platforms, double-click the installer and follow the instructions shown.

Check the FAQ instructions on how to download and install a Bitnami Stack for more details.

The application will be installed to the following default directories:

Operating System    Directory
Windows             C:\Bitnami\APPNAME-VERSION
Mac OS X            /Applications/APPNAME-VERSION
Linux               /opt/APPNAME-VERSION (running as root user)
OS X VM
  • Download the OS X VM file for the Bitnami ELK Stack from the Bitnami website.
  • Begin the installation process by double-clicking the image file and dragging the ELK OS X VM icon to the Applications folder.
  • Launch the VM by double-clicking the icon in the Applications folder.

What credentials do I need?

You need application credentials, consisting of a username and password. These credentials allow you to log in to your new Bitnami application.

What administrator username do I use to log in to the application for the first time?

  • For Windows, Linux and OS X installers, the username was configured by you when you first installed the application.
  • For OS X VMs, the username can be obtained by clicking the Bitnami badge at the bottom right corner of the application welcome page.

What is the administrator password?

  • For Windows, Linux and OS X installers, the password was configured by you when you first installed the application.
  • For OS X VMs, the password can be obtained by clicking the Bitnami badge at the bottom right corner of the application welcome page.

Getting started with Bitnami ELK Stack

To get started with the Bitnami ELK Stack, work through the following example, which reads the Apache access_log and charts the requests per minute to the ELK server:

Step 1: Configure Logstash

  • Load the ELK environment before starting the configuration of Logstash:

     $ sudo installdir/use_elk 
    
  • Stop the Logstash service:

     $ sudo installdir/ctlscript.sh stop logstash
    
  • Create the file installdir/logstash/conf/access-log.conf as below:

     input {
         file {
             path => "installdir/apache2/logs/access_log"
             start_position => "beginning"
         }
     }

     filter {
         grok {
             match => { "message" => "%{COMBINEDAPACHELOG}" }
         }
         date {
             match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
         }
     }

     output {
         elasticsearch {
             hosts => [ "127.0.0.1:9200" ]
         }
     }
    
  • Check that the configuration is OK. You should see an output message like the one below:

     $ installdir/logstash/bin/logstash -f installdir/logstash/conf/ --config.test_and_exit
     Configuration OK
    
  • Start the Logstash service:

     $ sudo installdir/ctlscript.sh start logstash
    

Step 2: Check Elasticsearch

  • Browse to your server (http://localhost/) to generate some data.
  • Check that Elasticsearch is receiving data. You should see an index called logstash-DATE:

     $ curl 'localhost:9200/_cat/indices?v'
        
     health status index               pri rep docs.count docs.deleted store.size pri.store.size
     green  open   .kibana               1   0          1            0      3.1kb          3.1kb
     yellow open   logstash-2017.02.21   5   1          1            2     11.2kb         11.2kb
    

Step 3: Configure Kibana pattern

  • Access the Kibana app in your browser (http://localhost/elk/app/kibana) and enter your username and password to pass the HTTP basic authentication.
  • Specify the time field by selecting "@timestamp" ("Available Fields -> @timestamp").
  • Click the green "Create" button.
  • On the left bar, click the "Discover" menu item. You should see something like the screenshot below:

ELK data

Step 4: Create a Kibana dashboard

  • On the left bar, click the "Visualize" menu item.
  • Select the "Vertical bar chart -> From a new search" menu options.
  • Select the "logstash-*" index.
  • Click the "X-Axis -> Aggregation -> Date Histogram" button sequence.
  • Select "Minute" in the "Interval" field, and click the "Apply changes" button.

ELK visualization

  • Save the visualization.
  • On the left bar, click the "Dashboard" menu item.
  • Click the "Add" button, select the previous visualization and save the dashboard.

ELK dashboard

What is the default configuration?

ELK default configuration

Elasticsearch configuration file

The main configuration file for Elasticsearch is installdir/elasticsearch/config/elasticsearch.yml.

Elasticsearch ports

By default, Elasticsearch will use port 9200 for requests and port 9300 for communication between nodes within the cluster. If these ports are in use when the server starts, it will attempt to use the next available port, such as 9201 or 9301.

Set custom ports using the configuration file, together with details such as the cluster name (elasticsearch by default), node name, address binding and discovery settings. All these settings are needed to add more nodes to your Elasticsearch cluster.
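
As a rough illustration, the settings mentioned above are plain key/value entries in installdir/elasticsearch/config/elasticsearch.yml. The sketch below uses example values only (key names are from Elasticsearch 5.x and may differ in other versions):

     # Illustrative overrides - adjust the values to your environment
     cluster.name: elasticsearch
     node.name: elk-node-1
     http.port: 9200
     transport.tcp.port: 9300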

Elasticsearch log file

The Elasticsearch log file is created at installdir/elasticsearch/logs/elasticsearch.log.

Logstash default configuration

Logstash configuration file

The main configuration file for Logstash is installdir/logstash/conf/logstash.conf.

Logstash port

By default, Logstash will use port 9600. If this port is in use when the server starts, it will attempt to use the next available port, such as 9601.
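
If you need to confirm which port Logstash is actually listening on, one option is to query its monitoring API, which serves this port in Logstash 5.x and later (assuming the default port):

     $ curl -XGET 'http://localhost:9600/?pretty'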

Logstash log file

The Logstash log file is created at installdir/logstash/logs/logstash.log.

Kibana default configuration

Kibana configuration file

The main configuration file for Kibana is installdir/kibana/config/kibana.yml.

Kibana ports

By default, Kibana will use port 5601. If this port is in use when the server starts, it will attempt to use the next available port, such as 5602.

You can set a custom port using the configuration file, together with details such as the Elasticsearch URL (http://127.0.0.1:9200 by default), Kibana index, default application to load or verbosity level.
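
For reference, these settings are plain key/value entries in installdir/kibana/config/kibana.yml. Below is a minimal sketch with example values (key names are from Kibana 5.x and may differ in other versions):

     # Illustrative settings - adjust the values to your environment
     server.port: 5601
     elasticsearch.url: "http://127.0.0.1:9200"
     kibana.index: ".kibana"
     kibana.defaultAppId: "discover"
     logging.verbose: false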

Kibana log file

The Kibana log file is created at installdir/kibana/logs/kibana.log.

How to change the Elasticsearch password?

Follow the steps below to change the Elasticsearch password:

  • Execute the following command. You will be prompted to enter a new password for the user named user.

    $ sudo installdir/apache2/bin/htpasswd -c installdir/elasticsearch/apache-conf/password user
    
  • Restart the Apache server:

    $ sudo installdir/ctlscript.sh restart apache
    

Now, you can access Elasticsearch using the new password.
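
To verify the new password, you can query Elasticsearch through the Apache frontend; the /elasticsearch path is protected by the same HTTP authentication (see the Apache configuration shown later in this guide), and curl will prompt for the password:

     $ curl -u user 'http://localhost/elasticsearch/'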

How to connect remotely?

How to connect remotely to Elasticsearch?

IMPORTANT: Bitnami native installers do not modify the firewall configuration of your computer, so the ELK ports may be open, which is a significant security risk. You are strongly advised to close the ELK ports (refer to the FAQ for more information).

To access the ELK server from another computer or application, make the following changes to the node's installdir/elasticsearch/config/elasticsearch.yml file:

  • network.host: Specify the hostname or IP address where the server will be accessible. Set it to 0.0.0.0 to listen on every interface.

  • network.publish_host: Specify the host name that the node publishes to other nodes for communication.
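
For example, to listen on every interface and publish a private IP address to other nodes, the relevant lines in installdir/elasticsearch/config/elasticsearch.yml would look similar to the sketch below (the IP address is illustrative). Restart Elasticsearch afterwards for the changes to take effect:

     network.host: 0.0.0.0
     network.publish_host: 192.168.1.10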

How to connect remotely to Logstash using SSL certificates?

It is strongly recommended to create an SSL certificate and key pair in order to verify the identity of the ELK server. In this example, we will use Filebeat to ship logs from client servers to the ELK server:

  • Add the ELK server's private IP address to the subjectAltName (SAN) field of the SSL certificate on the ELK server. To do so, open the OpenSSL configuration file (installdir/common/openssl/openssl.cnf), find the [ v3_ca ] section in the file, and add this line under it (substitute the ELK server's private IP address for the IP_ADDRESS placeholder):

      subjectAltName = IP: IP_ADDRESS
    
  • Generate the SSL certificate and private key in the appropriate locations (e.g. installdir/logstash/ssl/), with the following commands:

      $ cd installdir/logstash/ssl/
      $ openssl req -config installdir/common/openssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout logstash-remote.key -out logstash-remote.crt
    
  • Configure Logstash (installdir/logstash/conf/) to add SSL certificates for the input protocol. The code below will add SSL certificates for the Beats plugin:

      input {
        beats {
          port => 5044
          ssl => true
          ssl_certificate => "installdir/logstash/ssl/logstash-remote.crt"
          ssl_key => "installdir/logstash/ssl/logstash-remote.key"
        }
      }
    
  • Restart Logstash:

      $ sudo installdir/ctlscript.sh restart logstash
    
  • Open port 5044 in the ELK server firewall

  • The logstash-remote.crt file should be copied to all the client instances that send logs to Logstash.

  • Install Filebeat on the client machine. For example, the commands below add the Elastic Beats repository and install Filebeat:

    $ echo "deb https://packages.elastic.co/beats/apt stable main" |  sudo tee -a /etc/apt/sources.list.d/beats.list
    $ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    
    • Debian:

       $ sudo apt-get update
       $ sudo apt-get install filebeat
      
    • CentOS:

       $ sudo yum install filebeat
      
  • Configure Filebeat. In this example, add the lines below to the Filebeat configuration file (by default /etc/filebeat/filebeat.yml) to send syslog logs:

      filebeat:
        prospectors:
          -
            paths:
              - /var/log/auth.log
              - /var/log/syslog
              #  - /var/log/*.log
      ...
            document_type: syslog
      ...
      output:
        logstash:
          hosts: ["elk_server_private_ip:5044"]
          bulk_max_size: 1024
      ...
          tls:
            certificate_authorities: ["<logstash-remote.crt_path>"]
      ...
    
  • Restart Filebeat service:

    • Debian:

      $ sudo service filebeat restart
      
    • CentOS:

      $ sudo systemctl restart filebeat
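
To confirm from a client machine that Logstash is reachable over TLS on port 5044, you can optionally run a quick handshake test with the standard openssl s_client tool, reusing the placeholders from the Filebeat configuration above (a successful handshake indicates that the certificate and connectivity are in order):

     $ openssl s_client -connect elk_server_private_ip:5044 -CAfile <logstash-remote.crt_path>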
      

How to start or stop the services?

Linux

Bitnami native installers include a graphical tool to manage services. This tool is named manager-linux-x64.run on Linux and is located in the installation directory. To use this tool, double-click the file and then use the graphical interface to start, stop or restart services. Server log messages can be checked in the "Server Events" tab.

Management tool

The native installer also includes a command-line script to start, stop and restart applications, named ctlscript.sh. This script can be found in the installation directory and accepts the options start, stop, restart, and status. To use it, log in to the server console and execute it following the examples below:

  • Call it without any service names to start all services:

      $ sudo installdir/ctlscript.sh start
    
  • Use it to restart a specific service only by passing the service name as an argument - for example, elasticsearch, logstash, kibana or apache:

      $ sudo installdir/ctlscript.sh restart elasticsearch
      $ sudo installdir/ctlscript.sh restart logstash
      $ sudo installdir/ctlscript.sh restart apache
    
  • Obtain current status of all services:

      $ installdir/ctlscript.sh status
    

The list of available services varies depending on the required components for each application.

Mac OS X

Bitnami native installers include a graphical tool to manage services. This tool is named manager-osx on Mac OS X and is located in the installation directory. To use this tool, double-click the file and then use the graphical interface to start, stop or restart services. Server log messages can be checked in the "Server Events" tab.

Management tool

The native installer also includes a command-line script to start, stop and restart applications, named ctlscript.sh. This script can be found in the installation directory and accepts the options start, stop, restart, and status. To use it, log in to the server console and execute it following the examples below:

  • Call it without any service names to start all services:

    $ sudo installdir/ctlscript.sh start
    
  • Use it to restart a specific service only by passing the service name as an argument - for example, elasticsearch or apache:

     $ sudo installdir/ctlscript.sh restart elasticsearch
     $ sudo installdir/ctlscript.sh restart apache
    
  • Obtain current status of all services:

     $ installdir/ctlscript.sh status
    

The list of available services varies depending on the required components for each application.

NOTE: If you are using the stack manager for Mac OS X-VM, please check the following blog post to learn how to manage services from its graphical tool.

Windows

Bitnami native installers include a graphical tool to manage services. This tool is named manager-windows.exe on Windows and is located in the installation directory. To use this tool, double-click the file and then use the graphical interface to start, stop or restart services. Server log messages can be checked in the "Server Events" tab.

Management tool

The Windows native installer creates shortcuts to start and stop services created in the Start Menu, under "Programs -> Bitnami APPNAME Stack -> Bitnami Service". Servers can also be managed from the Windows "Services" control panel. Services are named using the format APPNAMESERVICENAME, where APPNAME is a placeholder for the application name and SERVICENAME is a placeholder for the service name. For example, the native installer for the Bitnami WordPress Stack installs services named wordpressApache and wordpressMySQL.

These services will be automatically started during boot. To modify this behaviour, refer to the section on disabling services on Windows.

How to access the administration panel?

Access the administration panel by browsing to http://localhost/elk/app/kibana.

How to install a plugin?

How to install a plugin on Elasticsearch?

Install plugins with the plugin tool provided by Elasticsearch. For example, the command below will install the ICU Analysis plugin (analysis-icu):

$ cd installdir/elasticsearch
$ sudo bin/elasticsearch-plugin install analysis-icu

Once the plugin has been installed, change the user and group ownership of the plugin directory to the elasticsearch user. For example:

$ sudo chown -R elasticsearch:elasticsearch installdir/elasticsearch/plugins/analysis-icu/

How to install a plugin on Logstash?

Logstash supports input, filter, codec and output plugins. These are available as self-contained gems (RubyGems.org). You can install, uninstall and upgrade plugins using the Command Line Interface (CLI) invocations described below:

  • Install a plugin:

      $ cd installdir/logstash
      $ bin/logstash-plugin install PLUGIN
    
  • Update a plugin:

      $ bin/logstash-plugin update PLUGIN
    
  • List all installed plugins:

      $ bin/logstash-plugin list
    
  • Uninstall a plugin (for Logstash versions 2.4 and earlier):

      $ bin/logstash-plugin uninstall PLUGIN
    

How to install a plugin on Kibana?

Add-on functionality for Kibana is implemented with plug-in modules.

  • Install a plugin:

      $ cd installdir/kibana
      $ bin/kibana-plugin install ORG/PLUGIN/VERSION
    
  • List all installed plugins:

      $ bin/kibana-plugin list
    
  • Remove a plugin:

      $ bin/kibana-plugin remove PLUGIN
    

You can also install a plugin manually by moving the plugin file to the plugins directory and unpacking the plugin files into a new directory.
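
As a rough sketch of the manual approach (assuming the plugin is distributed as a ZIP archive and using a hypothetical my-plugin name; the archive layout may require adjusting the target directory):

     $ cd installdir/kibana/plugins
     $ sudo mkdir my-plugin
     $ sudo unzip /path/to/my-plugin.zip -d my-plugin

Restart Kibana afterwards so the plugin is picked up.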

How to install X-Pack?

X-Pack is an extension which adds additional features to Elasticsearch and Kibana, such as security enhancements, machine learning features and others.

In order to install X-Pack into the ELK stack, please follow the steps in the sections below.

Install X-Pack into Elasticsearch

The steps below describe how to install the X-Pack plugin into Elasticsearch:

  • Stop Elasticsearch:

      $ sudo installdir/ctlscript.sh stop elasticsearch
    
  • Install the X-Pack plugin in the installdir/elasticsearch directory:

     $ cd installdir/elasticsearch
     $ sudo bin/elasticsearch-plugin install x-pack
    
  • Update the ownership for newly created files and directories, so they are accessible for Elasticsearch:

     $ sudo chown -R elasticsearch:elasticsearch config/elasticsearch.keystore config/x-pack
    
  • Make sure that the host for Elasticsearch is publicly accessible for X-Pack:
    • Open installdir/elasticsearch/config/elasticsearch.yml and update the network.publish_host property value to your server IP address.

      NOTE: X-Pack needs to access Elasticsearch on its assigned port (by default 9200). If you cannot access the port via the IP address mentioned above, change it to 127.0.0.1 and save (this way X-Pack can access Elasticsearch locally). An alternative is to open the port in your firewall, as described in the FAQ.
    • Start Elasticsearch:

         $ sudo installdir/ctlscript.sh start elasticsearch
      
  • Generate X-Pack default passwords (note down the passwords you obtain for the elastic and kibana users):

     $ sudo bin/x-pack/setup-passwords auto
    

Disable Apache HTTP authentication

For security purposes, Bitnami enables HTTP authentication (handled by Apache) for Kibana. However, the X-Pack plugin also enables its own authentication by default, and the combination of the two makes Kibana inaccessible.

In order to access Kibana again, follow the steps below to disable the HTTP authentication enabled by Bitnami:

  • In the installdir/elasticsearch/apache-conf/elasticsearch.conf file, remove the following lines and save:

     <LocationMatch "^/(elasticsearch|elk).*?">
       AuthType Basic
       AuthName "Insert your Elasticsearch credentials. If you have problems visit: https://docs.bitnami.com/?page=apps&name=elasticsearch"
       AuthBasicProvider file
       AuthUserFile "installdir/elasticsearch/apache-conf/password"
       Require user user
     </LocationMatch>
    
  • Restart Apache:

     $ sudo installdir/ctlscript.sh restart apache
    

Install X-Pack into Kibana

The steps below describe how to install the X-Pack plugin into Kibana:

  • Stop Kibana:

     $ sudo installdir/ctlscript.sh stop kibana
    
  • Install the X-Pack plugin in the installdir/kibana directory (this step may take up to 30 minutes):

     $ cd installdir/kibana
     $ sudo bin/kibana-plugin install x-pack
    
  • Modify the Kibana configuration so that X-Pack works with the Apache frontend server. To do so:
    • Open installdir/kibana/config/kibana.yml.
    • Add the following lines and save, replacing KIBANA_PASSWORD with the password generated for the kibana user in the previous step:

         elasticsearch.username: kibana
         elasticsearch.password: KIBANA_PASSWORD
         xpack.reporting.kibanaServer.port: 80
         xpack.reporting.kibanaServer.protocol: http
      
  • Start Kibana:

     $ sudo installdir/ctlscript.sh start kibana
    

You can now access Kibana at http://localhost/elk/ with the credentials generated above.

How to create a full backup of Elasticsearch data?

Backup

Elasticsearch provides a snapshot function that you can use to back up your data. Follow these steps:

  • Register a repository where the snapshot will be stored. This may be a local directory or cloud storage (which requires additional plugins). In this example, we will use a local directory, created and made accessible to the elasticsearch user with the following commands:

     $ cd /home/bitnami
     $ mkdir backups
     $ sudo chown elasticsearch:bitnami /home/bitnami/backups/
     $ sudo chmod u+rwx /home/bitnami/backups/
    
  • Update the installdir/elasticsearch/config/elasticsearch.yml file and add the path.repo variable to it as shown below, pointing to the above repository location:

     path.repo: ["/home/bitnami/backups"]
    
  • Initialize the repository via the Elasticsearch REST API with the following commands:

     $ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
         "type":"fs",
         "settings":{
             "location":"/home/bitnami/backups/my_backup",
             "compress":true
         }
     }'
    

    The location property has to be set to the absolute path to the backup files. In this example, my_backup is the name of the backup repository.

    See registered repositories with this command:

     $ curl -XGET 'http://localhost:9200/_snapshot?pretty'
    
  • Once the repository is registered, launch the backup with the following command:

     $ curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true&pretty'
    

    In this example, my_backup is the name of the repository created previously and snapshot_1 is the name for the backup. The wait_for_completion option will block the command line until the snapshot is complete. To create the snapshot in the background, simply omit this option, as shown below:

     $ curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1'
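
     Whether you wait for completion or run it in the background, you can check the state of the snapshot later with the snapshot info API (same repository and snapshot names as above):

      $ curl -XGET 'localhost:9200/_snapshot/my_backup/snapshot_1?pretty'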
    

Restore

To restore a backup over existing data, follow these steps:

  • Close the specific indices that will be overwritten with this command:

     $ curl -XPOST 'localhost:9200/my_index/_close'
    

    Optionally, close all indices:

     $ curl -XPOST 'localhost:9200/_all/_close'
    
  • Restore the backup with the following command. This command will also reopen the indices closed before.

     $ curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
    

For more information, refer to the official documentation.

How to upgrade ELK?

NOTE: It's highly recommended to perform a backup before any upgrade.

Upgrade Elasticsearch

Since version 0.90.7, Elasticsearch supports rolling upgrades. As a result, it's not necessary to stop the entire cluster during the upgrade process. Instead, it is possible to upgrade one node at a time and keep the rest of the cluster operating normally.

To upgrade a node, follow the steps below:

  • Disable shard reallocation using the command below:

     $ curl -XPUT localhost:9200/_cluster/settings -d '{
         "transient" : {
             "cluster.routing.allocation.enable" : "none"
         }
     }'
    
  • Stop non-essential indexing and perform a synced flush (optional):

     $ curl -XPOST 'http://localhost:9200/_flush/synced'
    
  • Stop the node:

     $ curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'
     $ sudo installdir/ctlscript.sh stop elasticsearch
    
  • Download the latest version.

  • Extract to a new directory (not overwriting the current installation) - for example, /tmp/new_elasticsearch.

  • Rename old files:

     $ cd installdir
     $ sudo mv elasticsearch/bin elasticsearch/old_bin
     $ sudo mv elasticsearch/lib elasticsearch/old_lib
     $ sudo mv elasticsearch/modules elasticsearch/old_modules
    
  • Copy files from new installation directory:

     $ sudo cp -r /tmp/new_elasticsearch/bin elasticsearch/bin
     $ sudo cp -r /tmp/new_elasticsearch/lib elasticsearch/lib
     $ sudo cp -r /tmp/new_elasticsearch/modules elasticsearch/modules
    
  • Start the node again:

     $ sudo installdir/ctlscript.sh start elasticsearch
    
  • Remove the replicas:

     $ curl -XPUT '127.0.0.1:9200/_settings' -d '{"number_of_replicas": 0}'
    
  • Confirm that the node joins the cluster:

     $ curl -XGET 'http://localhost:9200/_cat/nodes'
    
  • Re-enable shard reallocation:

     $ curl -XPUT localhost:9200/_cluster/settings -d '{
         "transient" : {
             "cluster.routing.allocation.enable" : "all"
         }
     }'
    
  • Wait for the node to recover:

     $ curl -XGET 'http://localhost:9200/_cat/health'
    

Repeat the process for all remaining nodes of your cluster.
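
After a node has been upgraded and has rejoined the cluster, you can optionally confirm the version it is running by querying the root endpoint (the response includes a version.number field):

     $ curl -XGET 'http://localhost:9200/?pretty'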

Upgrade Logstash

To upgrade Logstash, follow the steps below:

  • Stop the service:

     $ sudo installdir/ctlscript.sh stop logstash
    
  • Download the latest version.

  • Extract to a new directory (not overwriting the current installation) - for example, /tmp/new_logstash.

  • Back up the old files:

     $ cd installdir
     $ sudo cp -r logstash old_logstash
    
  • Copy files from new installation directory:

     $ sudo cp -r /tmp/new_logstash/* logstash/
    
  • Test your configuration file:

     $ installdir/logstash/bin/logstash -t -f installdir/logstash/conf/logstash.conf
    
  • Start the service again:

     $ sudo installdir/ctlscript.sh start logstash
    

Upgrade Kibana

To upgrade Kibana, follow these steps:

  • Create a snapshot of the existing .kibana index.

  • Stop the service:

     $ sudo installdir/ctlscript.sh stop kibana
    
  • Download the latest version.

  • Extract to a new directory (not overwriting the current installation) - for example, /tmp/new_kibana.

  • Take note of the Kibana plugins that are already installed:

     $ installdir/kibana/bin/kibana-plugin list
    
  • Back up the old files:

     $ cd installdir
     $ sudo cp -r kibana old_kibana
    
  • Copy files from new installation directory:

     $ sudo cp -r /tmp/new_kibana/* kibana/
    
  • Recover the kibana.yml file:

     $ sudo cp old_kibana/config/kibana.yml kibana/config/kibana.yml
    
  • Start the service again:

     $ sudo installdir/ctlscript.sh start kibana
    

How to create an SSL certificate?

OpenSSL is required to create an SSL certificate. A certificate request can then be sent to a certificate authority (CA) to be signed into a certificate. If you have your own certificate authority, you can sign the request yourself, or you can use a self-signed certificate (for example, if you just want a test certificate or are setting up your own CA).

Follow the steps below for your platform.

Linux and Mac OS X

NOTE: OpenSSL will typically already be installed on Linux and Mac OS X. If not installed, install it manually using your operating system's package manager.

Follow the steps below:

  • Generate a new private key:

     $ sudo openssl genrsa -out installdir/apache2/conf/server.key 2048
    
  • Create a certificate:

     $ sudo openssl req -new -key installdir/apache2/conf/server.key -out installdir/apache2/conf/cert.csr
    
    IMPORTANT: Enter the server domain name when the above command asks for the "Common Name".
  • Send cert.csr to the certificate authority. Once the certificate authority has completed its checks (and received payment, if applicable), it will issue your new certificate.

  • Until the certificate is received, create a temporary self-signed certificate:

     $ sudo openssl x509 -in installdir/apache2/conf/cert.csr -out installdir/apache2/conf/server.crt -req -signkey installdir/apache2/conf/server.key -days 365
    
  • Back up your private key in a safe location after generating a password-protected version as follows:

     $ sudo openssl rsa -des3 -in installdir/apache2/conf/server.key -out privkey.pem
    

    Note that if you use this encrypted key in the Apache configuration file, it will be necessary to enter the password manually every time Apache starts. Regenerate the key without password protection from this file as follows:

     $ sudo openssl rsa -in privkey.pem -out installdir/apache2/conf/server.key
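
    Once the signed certificate (or the temporary self-signed one) is in place, you can optionally inspect its subject and validity dates with the standard openssl x509 tool:

     $ sudo openssl x509 -in installdir/apache2/conf/server.crt -noout -subject -dates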
    

Windows

NOTE: OpenSSL is not typically installed on Windows. Before following the steps below, download and install a binary distribution of OpenSSL.

Follow the steps below once OpenSSL is installed:

  • Set the OPENSSL_CONF environment variable to the location of your OpenSSL configuration file. Typically, this file is located in the bin/ subdirectory of your OpenSSL installation directory. Replace the OPENSSL-DIRECTORY placeholder in the command below with the correct location.

     $ set OPENSSL_CONF=C:\OPENSSL-DIRECTORY\bin\openssl.cfg
    
  • Change to the bin/ sub-directory of the OpenSSL installation directory. Replace the OPENSSL-DIRECTORY placeholder in the command below with the correct location.

     $ cd C:\OPENSSL-DIRECTORY\bin
    
  • Generate a new private key:

     $ openssl genrsa -out installdir/apache2/conf/server.key 2048
    
  • Create a certificate:

     $ openssl req -new -key installdir/apache2/conf/server.key -out installdir/apache2/conf/cert.csr
    
    IMPORTANT: Enter the server domain name when the above command asks for the "Common Name".
  • Send cert.csr to the certificate authority. Once the certificate authority has completed its checks (and received payment, if applicable), it will issue your new certificate.

  • Until the certificate is received, create a temporary self-signed certificate:

     $ openssl x509 -in installdir/apache2/conf/cert.csr -out installdir/apache2/conf/server.crt -req -signkey installdir/apache2/conf/server.key -days 365
    
  • Back up your private key in a safe location after generating a password-protected version as follows:

     $ openssl rsa -des3 -in installdir/apache2/conf/server.key -out privkey.pem
    

    Note that if you use this encrypted key in the Apache configuration file, it will be necessary to enter the password manually every time Apache starts. Regenerate the key without password protection from this file as follows:

     $ openssl rsa -in privkey.pem -out installdir/apache2/conf/server.key
    

Find more information about certificates at http://www.openssl.org.

How to enable HTTPS support with SSL certificates?

TIP: If you wish to use a Let's Encrypt certificate, you will find specific instructions for enabling HTTPS support with Let's Encrypt SSL certificates in our Let's Encrypt guide.
NOTE: The steps below assume that you are using a custom domain name and that you have already configured the custom domain name to point to your cloud server.

Bitnami images come with SSL support already pre-configured and with a dummy certificate in place. Although this dummy certificate is fine for testing and development purposes, you will usually want to use a valid SSL certificate for production use. You can either generate this on your own (explained here) or you can purchase one from a commercial certificate authority.

Once you obtain the certificate and certificate key files, you will need to update your server to use them. Follow these steps to activate SSL support:

  • Use the table below to identify the correct locations for your certificate and configuration files.

    Variable                                  Value
    Current application URL                   https://[custom-domain]/
                                              (example: https://my-domain.com/ or https://my-domain.com/appname)
    Apache configuration file                 installdir/apache2/conf/bitnami/bitnami.conf
    Certificate file                          installdir/apache2/conf/server.crt
    Certificate key file                      installdir/apache2/conf/server.key
    CA certificate bundle file (if present)   installdir/apache2/conf/server-ca.crt
  • Copy your SSL certificate and certificate key file to the specified locations.

    NOTE: If you use different names for your certificate and key files, you should reconfigure the SSLCertificateFile and SSLCertificateKeyFile directives in the corresponding Apache configuration file to reflect the correct file names.
  • If your certificate authority has also provided you with a PEM-encoded Certificate Authority (CA) bundle, you must copy it to the correct location in the previous table. Then, modify the Apache configuration file to include the following line below the SSLCertificateKeyFile directive. Choose the correct directive based on your scenario and Apache version:

    Variable                                  Value
    Apache configuration file                 installdir/apache2/conf/bitnami/bitnami.conf
    Directive to include (Apache v2.4.8+)     SSLCACertificateFile "installdir/apache2/conf/server-ca.crt"
    Directive to include (Apache < v2.4.8)    SSLCertificateChainFile "installdir/apache2/conf/server-ca.crt"
    NOTE: If you use a different name for your CA certificate bundle, you should reconfigure the SSLCertificateChainFile or SSLCACertificateFile directives in the corresponding Apache configuration file to reflect the correct file name.
  • Once you have copied all the server certificate files, you may make them readable by the root user only with the following commands:

     $ sudo chown root:root installdir/apache2/conf/server*
    
     $ sudo chmod 600 installdir/apache2/conf/server*
    
  • Open port 443 in the server firewall. Refer to the FAQ for more information.

  • Restart the Apache server.

You should now be able to access your application using an HTTPS URL.
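
For reference, the relevant directives in installdir/apache2/conf/bitnami/bitnami.conf typically look similar to the sketch below; the exact contents depend on your stack version, so treat this only as a guide to where the certificate paths from the table above are used:

<VirtualHost _default_:443>
  DocumentRoot "installdir/apache2/htdocs"
  SSLEngine on
  SSLCertificateFile "installdir/apache2/conf/server.crt"
  SSLCertificateKeyFile "installdir/apache2/conf/server.key"
  # Add the line below only if a CA bundle was provided (Apache v2.4.8+):
  # SSLCACertificateFile "installdir/apache2/conf/server-ca.crt"
  ...
</VirtualHost>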

How to force HTTPS redirection with Apache?

Add the following lines in the default Apache virtual host configuration file at installdir/apache2/conf/bitnami/bitnami.conf, inside the default VirtualHost directive, so that it looks like this:

<VirtualHost _default_:80>
  DocumentRoot "installdir/apache2/htdocs"
  RewriteEngine On
  RewriteCond %{HTTPS} !=on
  RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [R,L]
  ...
</VirtualHost>

After modifying the Apache configuration files:

  • Open port 443 in the server firewall. Refer to the FAQ for more information.

  • Restart Apache to apply the changes.

How to debug Apache errors?

Once Apache starts, it will create two log files: installdir/apache2/logs/access_log and installdir/apache2/logs/error_log.

  • The access_log file is used to track client requests. When a client requests a document from the server, Apache records several parameters associated with the request in this file, such as the IP address of the client, the document requested, the HTTP status code, and the current time.

  • The error_log file is used to record important events. This file includes error messages, startup messages, and any other significant events in the life cycle of the server. This is the first place to look when you run into a problem when using Apache.
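
Apache configuration mistakes (for example, after editing bitnami.conf) can also be caught before restarting the server by running Apache's built-in syntax check; the apachectl helper is assumed to be bundled at installdir/apache2/bin:

     $ sudo installdir/apache2/bin/apachectl configtest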

If no error is found, you will see a message similar to:

Syntax OK

Updating the IP address or hostname

If the machine's IP address or domain name changes, ELK must be updated to use the new value. The bnconfig tool has an option that updates the IP address, called --machine_hostname (use --help to check whether that option is available for your application). Note that this tool changes the URL to http://NEW_DOMAIN/elk.

$ sudo installdir/apps/elk/bnconfig --machine_hostname NEW_DOMAIN

If you have configured your machine to use a static domain name or IP address, you should rename or remove the installdir/apps/elk/bnconfig file.

$ sudo mv installdir/apps/elk/bnconfig installdir/apps/elk/bnconfig.disabled
NOTE: Be sure that your domain is propagated. Otherwise, this will not work. You can verify the new DNS record by using the Global DNS Propagation Checker and entering your domain name into the search field.

You can also change your hostname by modifying it in your hosts file:

  • Open the hosts file with your preferred editor:

     $ sudo nano /etc/hosts
    
  • Add a new line with the IP address and the new hostname. Remember to replace the IP-ADDRESS and DOMAIN placeholders with the correct IP address and domain name:

     IP-ADDRESS DOMAIN

Troubleshooting

Elasticsearch has strict kernel requirements. You may encounter the errors below when starting the Elasticsearch service:

ERROR: bootstrap checks failed
max file descriptors [XXX] for elasticsearch process is too low, increase to at least [65536]
max virtual memory areas vm.max_map_count [XXX] is too low, increase to at least [262144]

To avoid them, we strongly recommend applying these changes before installing:

  • Update the /etc/security/limits.conf file and add the lines below:

     * soft nofile 65536
     * hard nofile 65536
    
  • Update the /etc/sysctl.conf file and add the lines below:

     vm.max_map_count=262144
     fs.file-max=65536
    
  • Reboot your system
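
If you prefer not to reboot immediately, the kernel parameters can also be applied on the fly with standard sysctl commands (the limits.conf changes still require logging in again or rebooting to take effect):

     $ sudo sysctl -w vm.max_map_count=262144
     $ sudo sysctl -w fs.file-max=65536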

How to add nodes to an Elasticsearch cluster?

To add additional nodes to a cluster, update the following configuration parameters in the node's installdir/elasticsearch/config/elasticsearch.yml file:

  • cluster.name: All the nodes should have the same cluster name to work properly.
  • node.name: The name of each node should be unique. Give your nodes meaningful names according to their functions so they are easier to identify.
  • network.publish_host: The host name that a node publishes to other nodes for communication. This host should be accessible at least from the master node.
  • discovery.zen.ping.unicast.hosts: When nodes are in the same sub-network, they will auto-configure themselves into a cluster. In other cases, specify a list with your nodes in this parameter.
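
Putting these together, a minimal sketch of an additional node's elasticsearch.yml might look like the example below (all values are illustrative):

     cluster.name: my-elk-cluster
     node.name: elk-node-2
     network.publish_host: 192.168.1.11
     discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11"]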

Refer to the Elasticsearch official documentation for more information.

How to make the Kibana dashboard public?

NOTE: For security reasons, we do not recommend disabling authentication.

By default, you will be prompted for a username and password every time you access the Kibana dashboard. This can create problems if, for example, you wish to embed Kibana data in other pages. To disable the authentication prompt, follow these steps:

  • Edit the Apache configuration file at installdir/elasticsearch/apache-conf/elasticsearch.conf and remove the LocationMatch section.

  • Restart Apache by running the command below:

     $ sudo installdir/ctlscript.sh restart apache
    

How to install elasticsearch-head?

Elasticsearch-head is a Web front-end for an Elasticsearch cluster. For Elasticsearch 5.x, site plugins are not supported, so it needs to run as a standalone server. Follow these steps:

  • Install Node.js and npm. For example, the commands below will install them on Debian:

     $ sudo apt install nodejs-legacy npm
    
  • Download the elasticsearch-head ZIP file and decompress it:

     $ wget https://github.com/mobz/elasticsearch-head/archive/master.zip
     $ unzip master.zip
    
  • Install the modules and run the service:

     $ cd elasticsearch-head-master
     $ npm install
     $ ./node_modules/grunt/bin/grunt server &
    
  • Update the installdir/elasticsearch/config/elasticsearch.yml file and enable CORS by setting http.cors.enabled to true:

     http.cors.enabled: true
    
  • In the same file, set the http.cors.allow-origin variable to the domains that are allowed to send cross-origin requests. If you prepend and append a "/" to the value, this will be treated as a regular expression. For example:

     http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
    
NOTE: You can set the value of http.cors.allow-origin to "*" to allow CORS requests from anywhere if you wish. However, this is not recommended as it is a security risk.
  • Add Apache configuration for elasticsearch-head to installdir/elasticsearch/apache-conf/elasticsearch.conf:

     ProxyPass        /elasticsearch-head http://127.0.0.1:9100
     ProxyPassReverse /elasticsearch-head http://127.0.0.1:9100
    
  • Restart the services:

     $ sudo installdir/ctlscript.sh restart apache
    
  • Browse to http://localhost/elasticsearch-head/?base_uri=http://localhost/elasticsearch and insert your Elasticsearch credentials. You should see something like the screenshot below:

elasticsearch-head interface

Which components are installed with the Bitnami ELK Stack?

The Bitnami ELK Stack ships the components listed below. If you want to know which specific version of each component is bundled in the stack you are downloading, check the README.txt file on the download page or in the stack installation directory. You can also find more information about each component using the links below.

Main components

  • Elasticsearch
  • Logstash
  • Kibana
  • Apache HTTP Server (used as the frontend web server and for HTTP authentication)
