
Bitnami Kafka for 1&1 Cloud Platform

Description

Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.

First steps with the Bitnami Kafka Stack

Welcome to your new Bitnami application running on 1&1! Here are a few questions (and answers!) you might need when first starting with your application.

What credentials do I need?

You need two sets of credentials:

  • The application credentials, consisting of a username and password. These credentials allow you to log in to your new Bitnami application.

  • The server credentials, consisting of an SSH username and password. These credentials allow you to log in to your 1&1 Cloud Platform server using an SSH client and execute commands on the server using the command line.

What is the administrator username set for me to log in to the application for the first time?

Username: user

What is the administrator password?

What SSH username should I use for secure shell access to my application?

SSH username: root

How do I get my SSH key or password?

What are the default ports?

A port is an endpoint of communication in an operating system that identifies a specific process or a type of service. Bitnami stacks include several services or servers that require a port.

If you need to open ports for remote access, follow the instructions given in the FAQ.

Port 22 is the default port for SSH connections.

The Kafka access port is 9092. This port is closed by default; you must open it to enable remote access.
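Once a port is open, a quick connectivity probe from the client machine confirms it is reachable. A minimal sketch, assuming a Bash shell on the client (the port_open helper name is illustrative, not part of the stack):

```shell
# port_open HOST PORT -- succeeds if something is listening on HOST:PORT.
# Uses Bash's /dev/tcp pseudo-device; gives up after 2 seconds.
port_open() {
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# Example: check the Kafka port on your server (replace SERVER-IP):
# port_open SERVER-IP 9092 && echo "Kafka port is open"
```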

How to start or stop the services?

Each Bitnami stack includes a control script that lets you easily stop, start and restart services. The script is located at /opt/bitnami/ctlscript.sh. Call it without any service name arguments to start all services:

$ sudo /opt/bitnami/ctlscript.sh start

Or use it to restart a single service, such as Kafka only, by passing the service name as an argument:

$ sudo /opt/bitnami/ctlscript.sh restart kafka

Use this script to stop all services:

$ sudo /opt/bitnami/ctlscript.sh stop

Restart all services by running the script with the restart argument:

$ sudo /opt/bitnami/ctlscript.sh restart

Obtain a list of available services and operations by running the script without any arguments:

$ sudo /opt/bitnami/ctlscript.sh

How to upload files to the server with SFTP?

NOTE: Bitnami applications can be found in /opt/bitnami/apps.
  • If you are using the Bitnami Launchpad for 1&1 Cloud Platform, obtain your SSH credentials by following these steps:

    • Browse to the Bitnami Launchpad for 1&1 and sign in if required using your Bitnami account.
    • Select the "Virtual Machines" menu item.
    • Select your cloud server from the resulting list.
    • Note the server IP address and SSH credentials on the resulting page.

      SSH credentials

  • If you are using the 1&1 Control Panel, obtain your SSH credentials by following these steps:

    • Log in to the 1&1 Control Panel.
    • Navigate to the "Infrastructure -> Servers" section.
    • Look through the list of servers until you find the server you wish to modify. Click the server name.
    • In the "Features -> Server access" section, note the SSH username and click the "Show Password" link to obtain the corresponding SSH password.

      SSH credentials

Although you can use any SFTP/SCP client to transfer files to your server, this guide documents FileZilla (Windows, Linux and Mac OS X), WinSCP (Windows) and Cyberduck (Mac OS X).

Using a Password

Once you have your server's SSH credentials, choose your preferred application and follow the steps below to connect to the server using SFTP.

FileZilla

Follow these steps:

  • Download and install FileZilla.
  • Launch FileZilla and use the "File -> Site Manager -> New Site" command to bring up the FileZilla Site Manager, where you can set up a connection to your server.
  • Enter your server host name.
  • Select "SFTP" as the protocol and "Ask for password" as the logon type. Specify root as the user name and enter the server password.

    FileZilla configuration

  • Use the "Connect" button to connect to the server and begin an SFTP session. You might need to accept the server key, by clicking "Yes" or "OK" to proceed.

You should now be logged into the /root directory on the server. You can now transfer files by dragging and dropping them from the local window to the remote server window.

If you have problems accessing your server, get extra information by using the "Edit -> Settings -> Debug" menu to activate FileZilla's debug log.

FileZilla debug log

WinSCP

Follow these steps:

  • Download and install WinSCP.
  • Launch WinSCP and in the "Session" panel, select "SFTP" as the file protocol.
  • Enter your server host name and specify root as the user name.

    WinSCP configuration

  • From the "Session" panel, use the "Login" button to connect to the server and begin an SFTP session. Enter the password when prompted.

    WinSCP configuration

You should now be logged into the /root directory on the server. You can now transfer files by dragging and dropping them from the local window to the remote server window.

Cyberduck

Follow these steps:

  • Select the "Open Connection" command and specify "SFTP" as the connection protocol.

    Cyberduck configuration

  • In the connection details panel, enter the server IP address, the username root and the SSH password.

    Cyberduck configuration

  • Use the "Connect" button to connect to the server and begin an SFTP session.

You should now be logged into the /root directory on the server. You can now transfer files by dragging and dropping them from the local window to the remote server window.
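If you prefer the command line to a graphical client, scp can perform the same transfer. A minimal sketch (the upload_to_server helper name is illustrative; it assumes the root SSH user described above):

```shell
# upload_to_server FILE HOST -- copy FILE into /root on the server over SCP.
# You will be prompted for the SSH password unless key authentication is set up.
upload_to_server() {
  scp "$1" "root@$2:/root/"
}

# Example: upload_to_server myfile.zip SERVER-IP
```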

What is the default configuration?

Kafka default configuration

Kafka configuration files

The Kafka configuration files are located in the /opt/bitnami/kafka/config/ directory.

Kafka ports

The Kafka server has a single broker running on port 9092. Only connections from localhost are permitted.

Kafka log files

The Kafka log files are created in the /opt/bitnami/kafka/logs/ directory.

Zookeeper default configuration

Zookeeper configuration files

The Zookeeper configuration files are located in the /opt/bitnami/zookeeper/conf/ directory.

Zookeeper ports

By default, the Zookeeper server runs on port 2181. Only connections from localhost are permitted.

How to create a Kafka multi-broker cluster?

This section describes the creation of a multi-broker Kafka cluster with brokers located on different hosts. In this scenario:

  • One server hosts the Zookeeper server and a Kafka broker
  • The second server hosts a second Kafka broker
  • The third server hosts a producer and a consumer

Kafka cluster

NOTE: Before beginning, ensure that ports 2181 (Zookeeper) and 9092 (Kafka) are open on the first server and port 9092 (Kafka) is open on the second server. Also ensure that remote connections are possible between the three servers (instructions).

Configuring the first server (Zookeeper manager and Kafka broker)

The default configuration may be used as is. However, you must perform the steps below:

  • Delete the contents of the Zookeeper and Kafka temporary directories

     $ sudo rm -rf /opt/bitnami/kafka/tmp/kafka-logs
     $ sudo rm -rf /opt/bitnami/zookeeper/tmp/zookeeper
    
  • Restart the Kafka and Zookeeper services.

     $ sudo /opt/bitnami/ctlscript.sh restart kafka
     $ sudo /opt/bitnami/ctlscript.sh restart zookeeper
    

Configuring the second server (Kafka broker)

  • Edit the /opt/bitnami/kafka/config/server.properties configuration file and update the broker.id parameter.

     broker.id=1
    

    This broker id must be unique within the Kafka cluster.

  • In the same file, update the zookeeper.connect parameter to reflect the public IP address of the first server.

     zookeeper.connect=PUBLIC_IP_ADDRESS_OF_ZOOKEEPER_MANAGER:2181
    
  • Delete the contents of the Zookeeper and Kafka temporary directories

     $ sudo rm -rf /opt/bitnami/kafka/tmp/kafka-logs
     $ sudo rm -rf /opt/bitnami/zookeeper/tmp/zookeeper
    
  • Stop the Zookeeper service.

     $ sudo /opt/bitnami/ctlscript.sh stop zookeeper
    
  • Restart the Kafka service.

     $ sudo /opt/bitnami/ctlscript.sh restart kafka
    
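The two property edits above can also be scripted. A minimal sketch using sed (the configure_broker helper name is illustrative; on the second server you would run it against /opt/bitnami/kafka/config/server.properties):

```shell
# configure_broker FILE BROKER_ID ZK_HOST -- set broker.id and point
# zookeeper.connect at the Zookeeper manager in a server.properties file.
configure_broker() {
  local conf="$1" id="$2" zk="$3"
  sed -i "s/^broker\.id=.*/broker.id=${id}/" "$conf"
  sed -i "s/^zookeeper\.connect=.*/zookeeper.connect=${zk}:2181/" "$conf"
}

# Example (second server):
# configure_broker /opt/bitnami/kafka/config/server.properties 1 ZOOKEEPER_MANAGER_IP
```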

Configuring the third server (Kafka message producer/consumer)

  • Edit the /opt/bitnami/kafka/config/producer.properties file and update the metadata.broker.list parameter with the public IP addresses of the two brokers:

     metadata.broker.list=PUBLIC_IP_ADDRESS_OF_FIRST_KAFKA_BROKER:9092,PUBLIC_IP_ADDRESS_OF_SECOND_KAFKA_BROKER:9092
    
  • Edit the /opt/bitnami/kafka/config/consumer.properties file and update the zookeeper.connect parameter to reflect the public IP address of the first server.

     zookeeper.connect=PUBLIC_IP_ADDRESS_OF_ZOOKEEPER_MANAGER:2181
    
  • Since this host only serves as a producer and a consumer, stop the Kafka and Zookeeper services:

     $ sudo /opt/bitnami/ctlscript.sh stop kafka
     $ sudo /opt/bitnami/ctlscript.sh stop zookeeper
    

Testing the cluster

NOTE: The following commands should be executed on the third server (Kafka message producer/consumer).
  • Create a new topic.

     $ /opt/bitnami/kafka/bin/kafka-topics.sh --create --zookeeper PUBLIC_IP_ADDRESS_OF_FIRST_KAFKA_BROKER:2181 --replication-factor 2 --partitions 1 --topic multiBroker
    
  • Produce some messages by running the command below and then entering some messages, each on a separate line. Enter Ctrl-C to end.

     $ /opt/bitnami/kafka/bin/kafka-console-producer.sh --broker-list PUBLIC_IP_ADDRESS_OF_FIRST_KAFKA_BROKER:9092 --topic multiBroker
     this is a message
     this is another message
     ^C
    
  • Consume the messages. The consumer will connect to the cluster and retrieve and display the messages you entered in the previous step.

     $ /opt/bitnami/kafka/bin/kafka-console-consumer.sh --zookeeper PUBLIC_IP_ADDRESS_OF_FIRST_KAFKA_BROKER:2181 --topic multiBroker --from-beginning
     this is a message
     this is another message
     ^C
    
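To confirm that the topic is actually replicated across both brokers, the same kafka-topics.sh tool has a --describe mode. A minimal sketch (the describe_topic wrapper name is illustrative):

```shell
# describe_topic ZK_HOST TOPIC -- show partition leaders, replicas and ISR.
# With --replication-factor 2, the Replicas column should list both broker ids.
describe_topic() {
  /opt/bitnami/kafka/bin/kafka-topics.sh --describe \
      --zookeeper "$1:2181" --topic "$2"
}

# Example: describe_topic PUBLIC_IP_ADDRESS_OF_ZOOKEEPER_MANAGER multiBroker
```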

How to run a Kafka producer and consumer from the server itself?

You can run the following example to publish and collect your first message:

  • Declare a new topic. The Kafka server is configured to use the server's public IP address:

     $ /opt/bitnami/kafka/bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
    

    We use --replication-factor to indicate how many servers are going to have a copy of the logs, and --partitions to choose the number of partitions for the topic we are creating.

  • Start a new producer on the same Kafka server and generate a message in the topic. Remember to replace SERVER-IP with your server's public IP address. Enter CTRL-D to end the message.

     $ /opt/bitnami/kafka/bin/kafka-console-producer.sh --broker-list SERVER-IP:9092 --topic test
    
     this is my first message 
    
  • Collect and display the first message in the consumer:

     $ /opt/bitnami/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
    
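The console producer also works non-interactively, which is handy for scripting. A minimal sketch that pipes a single message into the topic created above (the publish helper name is illustrative):

```shell
# publish MESSAGE -- send one message to the 'test' topic via the console producer.
publish() {
  echo "$1" | /opt/bitnami/kafka/bin/kafka-console-producer.sh \
      --broker-list localhost:9092 --topic test
}

# Example: publish "hello from a script"
```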

How to connect to Kafka from a different machine?

For security reasons, the Kafka ports in this solution cannot be accessed over a public IP address. To connect to Kafka and Zookeeper from a different machine, you must open ports 9092 and 2181 for remote access. Refer to the FAQ for more information on this.

IMPORTANT: Making this application's network ports public is a significant security risk. You are strongly advised to only allow access to those ports from trusted networks. If, for development purposes, you need to access from outside of a trusted network, please do not allow access to those ports via a public IP address. Instead, use a secure channel such as a VPN or an SSH tunnel. Follow these instructions to remotely connect safely and reliably.

Once you have added the firewall rule and opened the ports, perform these additional steps:

  • Edit your Zookeeper configuration file (/opt/bitnami/zookeeper/conf/zoo.cfg) and comment out the following line:

     clientPortAddress=localhost
    
  • Edit your Kafka configuration file (/opt/bitnami/kafka/config/server.properties). If necessary, uncomment the following line and change the value of the parameter to the public IP address of the server:

     #advertised.host.name=<hostname routable by clients>
    
  • Restart the server to reload the configuration files.

     $ sudo /opt/bitnami/ctlscript.sh restart
    
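The two configuration edits above can be scripted with sed. A minimal sketch (the enable_remote_access helper name is illustrative; pass the two configuration files and the server's public IP address):

```shell
# enable_remote_access ZOO_CFG KAFKA_CONF PUBLIC_IP
# Comments out Zookeeper's localhost-only restriction and advertises
# the public IP address to Kafka clients.
enable_remote_access() {
  local zoo="$1" kafka="$2" ip="$3"
  sed -i 's/^clientPortAddress=localhost/#clientPortAddress=localhost/' "$zoo"
  sed -i "s/^#\{0,1\}advertised\.host\.name=.*/advertised.host.name=${ip}/" "$kafka"
}

# Example:
# enable_remote_access /opt/bitnami/zookeeper/conf/zoo.cfg \
#     /opt/bitnami/kafka/config/server.properties 203.0.113.10
```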

How to debug Kafka and Zookeeper errors?

The main Kafka log file is created at /opt/bitnami/kafka/logs/server.log.

The main Zookeeper log file is created at /opt/bitnami/zookeeper/tmp/zookeeper.out.
