In recent years, containers have become the standard way to develop and ship applications because they package everything an application needs to run: code, runtime, databases, system libraries, and so on.
Kubernetes is an open source solution for managing application containers. With Kubernetes, you can decide when your containers should run, scale your application containers up or down, and check the resource consumption of your application deployments. The Kubernetes project is based on Google's experience running containers in production, and it is gaining momentum as the easiest and most recommended way to manage containers in production.
Applications can be installed in Kubernetes using Helm charts. Helm charts are packages that contain all the information that Kubernetes needs to know for managing a specific application within the cluster.
This guide will walk you, step by step, through running Kubernetes locally using Minikube or in the cloud via Google Container Engine, and deploying applications within a Kubernetes cluster using Bitnami Helm charts.
Kubernetes is an open source project designed specifically for container orchestration. Kubernetes offers a number of key features, including multiple storage APIs, container health checks, manual or automatic scaling, rolling upgrades and service discovery. Applications can be installed on a Kubernetes cluster via Helm charts, which provide streamlined package management functions.
If you're new to this container-centric infrastructure, the easiest way to get started with Kubernetes is with Bitnami Helm charts. Bitnami offers a number of stable, production-ready Helm charts to deploy popular software applications in a Kubernetes cluster.
Kubernetes Native App architecture
The architecture of a typical cloud-native application consists of three tiers: a persistence or database tier, a backend tier and a frontend tier. In Kubernetes, you define and create multiple resources for each of these tiers, as you can see in the following diagram:
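As an illustrative sketch of that mapping (the exact resources vary from application to application), the three tiers typically correspond to Kubernetes primitives like this:

```
frontend tier     →  Service (LoadBalancer) + Deployment              e.g. WordPress pods
backend tier      →  Service (ClusterIP) + Deployment                 e.g. application logic
persistence tier  →  Service + Deployment + PersistentVolumeClaim    e.g. MariaDB
```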
Working with Kubernetes
Kubernetes is platform-agnostic and integrates with a number of different cloud providers, allowing you to pick the platform that best suits your needs. Check Kubernetes official documentation to find out the right solution for you.
There are two different interfaces from which you can manage the resources on your cluster: the kubectl command-line tool and the Kubernetes Dashboard (a Web user interface).
The official Helm webpage defines Helm as a "package manager for Kubernetes", but it is more than that. Helm is a tool for managing applications that run in the Kubernetes cluster manager. Helm provides a set of operations that are useful for managing applications, such as inspect, install, upgrade and delete.
Helm is composed of two parts:
- A client: Helm
- A server: Tiller
Helm Charts: applications ready-to-deploy
Helm charts are packages of pre-configured Kubernetes resources. A Helm chart describes how to manage a specific application on Kubernetes. It consists of metadata that describes the application, plus the infrastructure needed to operate it in terms of the standard Kubernetes primitives. Each chart references one or more (typically Docker-compatible) container images that contain the application code to be run.
Helm charts contain at least these two elements:
- A description of the package (Chart.yaml).
- One or more templates, which contain Kubernetes manifest files.
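As a rough sketch of the layout (the file names follow Helm's conventions; the chart name "mychart" and the template file names are arbitrary examples), a chart directory typically looks like this:

```
mychart/
  Chart.yaml          # package metadata: name, version, description
  values.yaml         # default configuration values, overridable at install time
  templates/          # Kubernetes manifest templates
    deployment.yaml
    service.yaml
```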
Although you can run application containers using the Kubernetes command-line tool (kubectl), the easiest way to run workloads in Kubernetes is using ready-made Helm charts. Helm charts tell Kubernetes how to perform the application deployment and how to manage the container clusters.
Bitnami provides a set of stable and tested charts that give you the opportunity to deploy popular software applications with ease and confidence.
This tutorial will take you through the configuration and installation of the tools you will need in order to run applications in Kubernetes. Two solutions have been chosen:
- Minikube
- Google Container Engine (GKE)
Minikube is the official way to run Kubernetes locally. It is a tool that runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your computer. It is an easy way to try out Kubernetes and is also useful for testing and development scenarios.
Google Container Engine (also known as GKE) is a cluster manager and orchestration system for running Docker containers in the cloud. It is a production-ready environment with guaranteed uptime, load balancing and included container networking features. It allows you to create multiple-node clusters while also providing access to all Kubernetes' features.
For more information on how to run a Kubernetes cluster on other platforms, such as AWS, please refer to Heptio's Quick Start guide.
In this tutorial, you will learn how to install the needed requirements to run Bitnami applications on Kubernetes using Minikube and GKE.
Here are the steps you'll follow in this tutorial:
- Configure the platform
- Create a Kubernetes cluster
- Install the kubectl command-line tool
- Install Helm and Tiller
- Install an application using a Helm chart
- Access the Kubernetes Dashboard
- Uninstall an application using Helm
The next sections will walk you through these steps in detail.
Assumptions and prerequisites
This guide focuses on deploying Bitnami applications in a Kubernetes cluster running on either Google Container Engine (GKE) or Minikube. The example applications are Redis, MongoDB, Odoo and WordPress.
This guide makes the following assumptions:
Option 1: Using Minikube
Option 2: Using GKE
- You already have an account and a project created in Google Cloud Platform.
- You have the Cloud SDK (the Google command-line interface for Google Cloud Platform) installed.
|NOTE: GKE is recommended for production deployments because it is a production-ready environment with guaranteed uptime, load balancing and included container networking features. That said, the commands shown in this guide can be used on both GKE and Minikube. Commands specific to one or the other platform are explicitly called out as such.|
Step 1: Configure the platform
The first step for working with Kubernetes clusters is to install Minikube, if you have chosen to work locally, or to configure the gcloud command-line tool, if you prefer to run your cluster containers in the cloud.
Option 1: Install Minikube
Install Minikube on your local system. Since Minikube runs the cluster inside a Virtual Machine, you will also need virtualization software such as VirtualBox installed. Follow these steps:
Browse to the Minikube latest releases page.
Select the distribution you wish to download depending on your Operating System.
|NOTE: This tutorial assumes that you are using Mac OS X or Linux. The Minikube installer for Windows is under development. To get an experimental release of Minikube for Windows, check the Minikube releases page.|
Open a new console window on your local system.
To obtain the latest Minikube release, execute the following command depending on your OS. Remember to replace the X.Y.Z and OS_DISTRIBUTION placeholders with the latest version and software distribution of Minikube respectively. Check the Minikube latest releases page for more information on this.
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/vX.Y.Z/minikube-OS_DISTRIBUTION-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
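To make the placeholder substitution concrete, here is a sketch of how the download URL is assembled (the version shown is only an example; check the releases page for the current one):

```shell
# Illustrative only: how the Minikube download URL is assembled from the placeholders.
VERSION="v0.25.0"        # example value; replace with the latest release
OS_DISTRIBUTION="linux"  # use "darwin" on Mac OS X
echo "https://storage.googleapis.com/minikube/releases/${VERSION}/minikube-${OS_DISTRIBUTION}-amd64"
```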
Option 2: Install and configure the gcloud command-line tool
Once you have installed the Google Cloud SDK, follow these instructions to initialize and configure the gcloud CLI:
From a console window on your local system, start the authentication process by running the following command:
$ gcloud init
You will be prompted for the information below:
- Enter the Google Cloud Platform account and project you wish to connect to Google Compute Engine.
- Configure Google Compute Engine by entering the compute zone.
IMPORTANT: The compute zone you enter when configuring Google Compute Engine must match the zone you will specify when you create the Kubernetes cluster.
Now that your gcloud CLI is set up, your local system is communicating with your Google Cloud Platform account and project. After completing this step you will be able to create a Kubernetes cluster.
Step 2: Create a Kubernetes cluster
Once your platform is installed and configured, you are able to create a Kubernetes cluster. Follow the appropriate set of instructions depending on whether you're using Minikube or GKE:
Option 1: Create a cluster using Minikube
Starting Minikube creates a single-node cluster. Run the following command in your terminal to complete the creation of the cluster:
$ minikube start
To run commands against your Kubernetes cluster, the kubectl CLI is needed. See Step 3 to complete the installation of kubectl.
Option 2: Create a cluster on GKE
To create a Kubernetes cluster, follow the instructions below:
Run the command below. Remember to replace MY_KUBERNETES_CLUSTER with the name you chose for your cluster.
$ gcloud container clusters create MY_KUBERNETES_CLUSTER \
    --enable-cloud-logging \
    --enable-cloud-monitoring
NOTE: By default, a Kubernetes cluster is created with 3 nodes. You can add extra arguments to set parameters such as a specific number of nodes or the disk size:
$ gcloud container clusters create MY_KUBERNETES_CLUSTER \
    --num-nodes N \
    --disk-size N
Once the cluster has been created you will see the information related to it:
The new cluster will also appear in the Container Engine -> Container cluster section within Google Cloud Platform:
Configure the compute zone. Remember that this zone must match the one you indicated in Step 1. Replace the COMPUTE_ZONE placeholder with the correct value.
$ gcloud config set compute/zone COMPUTE_ZONE
Get your cluster credentials:
$ gcloud container clusters get-credentials MY_KUBERNETES_CLUSTER
Authenticate your cluster:
$ gcloud auth application-default login
After you have run the command above, Google will prompt you to enter your Google account and allow the Google Auth Library to access your account information:
Now, your cluster is authenticated with the Google Cloud SDK. You can install the kubectl CLI and start to work with your Kubernetes cluster.
Step 3: Install the kubectl command-line tool
In order to start working on a Kubernetes cluster, it is necessary to install the Kubernetes command line (kubectl). Follow these steps to install the kubectl CLI:
Execute the following commands to install the kubectl CLI. OS_DISTRIBUTION is a placeholder for the binary distribution of kubectl; remember to replace it with the corresponding distribution for your Operating System (OS).
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/OS_DISTRIBUTION/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
|TIP: Once the kubectl CLI is installed, you can obtain information about the current version with the kubectl version command.|
|NOTE: If you are working in GKE, you can also install kubectl by using the gcloud components install kubectl command.|
Check that kubectl is correctly installed and configured by running the kubectl cluster-info command:
$ kubectl cluster-info
NOTE: The kubectl cluster-info command shows the IP addresses of the Kubernetes master node and its services.
You can also verify the cluster by checking the nodes. Use the following command to list the connected nodes:
$ kubectl get nodes
To get complete information on each node, run the following:
$ kubectl describe node
Step 4: Install Helm and Tiller
The easiest way to run and manage applications in a Kubernetes cluster is using Helm. Helm allows you to perform key operations for managing applications such as install, upgrade or delete. As previously mentioned, Helm is composed of two parts: Helm (the client) and Tiller (the server). Follow the steps below to complete both Helm and Tiller installation.
To install Helm, run the following commands:
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
TIP: If you are using OS X you can install it with the brew install command: brew install kubernetes-helm.
To complete the installation of Tiller run the command below:
$ helm init
In case you have already installed Helm, you can upgrade Tiller with this command:
$ helm init --upgrade
Verify that Tiller is correctly installed by inspecting the output of kubectl get pods as shown below:
$ kubectl --namespace kube-system get pods | grep tiller
tiller-deploy-2885612843-xrj5m   1/1   Running   0   4d
Once you have installed Helm, here is a set of useful commands for performing common actions:
$ helm search                    # search for available charts
$ helm inspect CHART             # show detailed information about a chart
$ helm install CHART             # install a chart
$ helm upgrade RELEASE CHART     # upgrade a release to a new chart version
$ helm list                      # list deployed releases
$ helm delete RELEASE            # delete a release
Step 5: Install an application using a Helm Chart
A Helm chart describes a specific version of an application; installing it in a cluster produces a "release". The chart includes files with the Kubernetes resources the application needs, as well as files that describe its installation, configuration, usage and license. The steps below show how to run the following Bitnami applications using Helm charts:
These are just some concrete examples of application releases. Find more Bitnami charts.
Executing the helm install command deploys the application on the Kubernetes cluster. You can install more than one chart in the same cluster, or across several clusters.
|IMPORTANT: If you don't specify a release name with the --name option, one will be automatically assigned.|
You can find an example of the installation of Redis on GKE using Helm charts below:
$ helm install stable/redis
|NOTE: Check the configurable parameters of the Redis chart and their default values at the official Kubernetes GitHub repository.|
Once the chart is installed, a "Notes" section is shown at the bottom of the installation output. It contains important instructions about how to obtain your application's IP address or credentials. Please check it carefully:
TIP: Unlike cloud platforms, Minikube doesn't support a load balancer, so if you're deploying the application on Minikube, use the command below instead:
$ helm install stable/redis --set serviceType=NodePort
You should see output similar to that shown below as the chart is installed on Minikube.
|IMPORTANT: When deploying on Minikube, you may see errors such as CrashLoopBackOff and your application may fail to start. This is typically an indication that the cluster does not have sufficient memory available to it. To resolve this error, make more resources available to the cluster (we recommend a minimum of 3 GB RAM) and try again.|
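Instead of passing each override on the command line with --set, you can also collect your overrides in a YAML file and pass it with the -f option. A minimal sketch (the file name custom-values.yaml is arbitrary, and serviceType is the key these charts use for the service type; other keys vary per chart, so check each chart's documentation):

```yaml
# custom-values.yaml -- pass it with: helm install stable/redis -f custom-values.yaml
serviceType: NodePort
```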
Find how to install MongoDB, Odoo or WordPress in the examples below:
To install the most recent MongoDB release on GKE, run the following command:
$ helm install stable/mongodb
To install on Minikube, run the following command:
$ helm install stable/mongodb --set serviceType=NodePort
NOTE: Check the configurable parameters of the MongoDB chart and their default values at the official Kubernetes GitHub repository.
To install the most recent Odoo release on GKE, run the following command:
$ helm install stable/odoo
To install on Minikube, run the following command:
$ helm install stable/odoo --set serviceType=NodePort
NOTE: Check the configurable parameters of the Odoo chart and their default values at the official Kubernetes GitHub repository.
To install the most recent WordPress release, run the following command:
$ helm install stable/wordpress --set mariadb.image=bitnami/mariadb:10.1.21-r0
To install on Minikube, run the following command:
$ helm install stable/wordpress --set mariadb.image=bitnami/mariadb:10.1.21-r0 --set serviceType=NodePort
NOTE: Check the configurable parameters of the WordPress chart and their default values at the official Kubernetes GitHub repository.
Now, you can manage your deployments from the Kubernetes Dashboard. Follow the instructions below to access the Web user interface.
Step 6: Access the Kubernetes Dashboard
The Kubernetes Dashboard is a Web user interface from which you can manage your clusters in a simpler, more digestible way. It provides information on the cluster state, deployments and container resources. You can also check both the credentials and the error logs of each pod within the deployment.
To open the Kubernetes Dashboard, you just need to run the following command:
$ kubectl proxy
This command creates a proxy server on port 8001 through which you can access the Kubernetes Dashboard, available at localhost:8001/ui. The home screen shows the "Workloads" section. Here you get an overview of the following cluster elements:
- CPU usage
- Memory usage
- Replica Sets
From this home screen, you can perform some basic actions such as:
- Monitoring the status of your deployments and pods.
- Checking pod and container(s) logs to identify possible errors during the creation of the containers.
- Finding application credentials.
Monitor the status of Deployments and Pods
- To check detailed information about the status of your deployments, navigate to the "Workloads -> Deployments" section located on the left menu. It shows a screen with a graphical representation of the CPU and memory usage, as well as a list of all deployments you have in your cluster.
- Click a deployment to obtain detailed information about it:
Pods are the smallest units in Kubernetes deployments. They can contain one or multiple containers (that need to share resources in order to work together). Learn more about pods.
The "Workloads -> Pods" section shows the pod list. By selecting a pod, you will see a "Details" section that contains information related to the pod, and a "Containers" section that includes the information related to that pod's container(s).
Follow these instructions to access pod and container information:
- To check the status of your pods in detail, navigate to the "Workloads -> Pods" section located on the left menu. It shows the pod list:
- Click the pod you'd like to access further details for.
The Kubernetes Dashboard allows you to check the logs of both the pod and any containers belonging to the pod to detect possible errors that might have occurred. To access the logs viewer, follow the steps below:
Navigate to the "Workloads -> Pods" section located on the left menu and select the pod you'd like to check from the pod list.
In the detail page of the selected pod, you will find a "View logs" link in both the "Details" and "Containers" sections. Click the one you want to see:
The logs viewer opens:
Find application credentials
The application credentials are shown in the "Notes" section after installing the application chart:
You can get the application username and password at any time by running the following command:
$ kubectl describe po
As you can see in the image above, the application password is configured as a secret password. To get it, browse to the Kubernetes Dashboard and follow these instructions:
Navigate to the "Config -> Secrets" section located on the left menu.
Click the application for which you wish to obtain the credentials.
In the "Data" section, click the eye icon to see the password:
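Kubernetes stores Secret values base64-encoded, so you can also retrieve a password from the command line. The sketch below shows the decoding step with a made-up example string (not a real credential); MY_SECRET and MY_KEY in the comment are placeholders, and the "Notes" section of your release lists the real names:

```shell
# Decoding a base64-encoded value, as stored in a Kubernetes Secret.
# The encoded string here is a made-up example, not a real credential.
echo "c2VjcmV0UGFzcw==" | base64 --decode
# Against a real cluster you would combine this with kubectl, for example:
#   kubectl get secret MY_SECRET -o jsonpath="{.data.MY_KEY}" | base64 --decode
```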
Step 7: Uninstall an application using Helm
To uninstall an application, run the helm delete command. Every Kubernetes resource tied to that release will be removed. Note that helm delete keeps a record of the deleted release; to remove the release completely, add the --purge option.
|TIP: To get the release name, you can run the helm list command.|
$ helm delete MY_RELEASE
|NOTE: Remember that MY_RELEASE is a placeholder, replace it with the name you have used during the chart installation process.|
To learn more about the topics discussed in this guide, use the links below:
- Production-ready Kubernetes applications by Bitnami
- Intro to deploying your favourite apps on Kubernetes
- How to run Kubernetes on Amazon Web Services (AWS)
- Google Container Engine
|TIP: If you already have a Kubernetes cluster created with some running application deployments, you can take a step further and try to scale and upgrade them. For detailed instructions, refer to our how-to tutorial for deploying, scaling and upgrading an application on Kubernetes with Helm.|
Do you want to learn more about how to deploy Bitnami applications on Kubernetes? Watch the following video to get started with Kubernetes architecture and Helm charts: