Install the AKS Engine on Azure Stack to Deploy a Kubernetes Cluster

Introduction

Azure Stack enables users to extend Azure services to their on-premises infrastructure. This makes it possible to operate consistently across different environments such as datacenters, edge locations, remote offices, and the cloud.

With the AKS Engine you can create and manage Kubernetes clusters on Azure and Azure Stack and deploy applications to those clusters using, for example, Helm charts.

This guide shows you how to install the AKS Engine on Azure Stack and provision a Kubernetes cluster, which you can then use for application deployments.

Tip

If you have already installed AKS on Azure Stack and created a fresh Kubernetes v1.8+ cluster, check out this guide to learn how to quickly deploy Bitnami applications on AKS and Azure Stack with Kubeapps.

Assumptions and prerequisites

This article assumes that you have access to an Azure Stack tenant portal and a subscription in which you can create resources. Your Azure Stack operator (either a service provider or an administrator in your organization) can provide the portal URL and the service principal credentials used later in this guide.

This article will walk you through the process of creating a Linux client virtual machine (VM) that you will use to install the AKS Engine. You will then deploy a Kubernetes cluster and install the kubectl command-line tool (CLI) and Helm to start managing your cluster. As a best practice, this guide also recommends removing public SSH access to the master nodes, installing an NGINX Ingress Controller for load balancing Kubernetes services, and deploying Cert-Manager to secure the cluster.


Step 1: Provision the client virtual machine (VM)

To use the AKS Engine CLI, it is necessary to have a Windows or Linux workstation. This guide uses an Ubuntu-based virtual machine deployed on Azure Stack. The main function of this VM is to host the tools you need to deploy and test an AKS cluster. To start, follow the instructions below:

  • Log in to your Azure Stack Portal. (The portal URL will be provided by your Azure Stack operator, either a service provider or an administrator in your organization.)
  • Navigate to the "Resource Groups" section to create a new resource group for your virtual machine and all Kubernetes resources.
Resource Groups
  • Click "Add". In the resulting screen, enter a name for your resource group, select a subscription, and a resource group location. Click "Create" to finish the process.
Add a resource group
  • In a few minutes, your new resource group will be displayed in the list of resource groups associated with your subscription.
List of resource groups
  • Select the resource group, then click the "Create resources" button. You will be redirected to the Marketplace.
  • Select a server image. This guide uses an Ubuntu Server 18.04 image.
Marketplace
  • In the "Create virtual machine -> Basics" section, complete the required fields. Select the resource group you created and click "OK" to continue.
Configure basic settings
  • Select the size for your cloud server. For more information, refer to the Microsoft Azure pricing sheet.
  • The next step is to configure optional features such as the virtual network or the network security group of the cloud server. Scroll down to find the "Select public inbound ports" section and select port 22 to allow SSH access.
Configure SSH access
  • In the "Choose virtual network" section, click "Create new". Enter a name for the virtual network, specify the range of IP addresses for that network. Repeat the same process for creating a new subnet. Click "OK" to continue.
Create Virtual Network
  • At this point, Azure will run a final validation. On the resulting page, you can see a summary of the proposed virtual machine deployment. Click "OK" to accept the current configuration for the virtual machine. This action starts the deployment of your Ubuntu server.
Summary

Azure Stack will now begin spinning up the new server. A notification will appear indicating the current status. The process usually takes a few minutes.

Go to resource group

Once the server has been provisioned, click "Go to resource" to check the resource group you just created.

Virtual Machine resources
  • Click on the name of the virtual machine to obtain its IP address. You will need it later to connect to the server through SSH.
Virtual Machine IP address

Step 2: Install the AKS Engine

The next step is to connect to the client virtual machine through SSH to start the installation of the AKS Engine. Follow the steps below:

  • Open a terminal window and execute the following command to check that the SSH port is publicly available. Replace SERVER-IP with the IP address you obtained in the last step.

    nc -zv SERVER-IP 22
    
  • Connect to the virtual machine through SSH by executing the command below. Remember to replace the USERNAME placeholder with the username you entered when creating the server, and the SERVER-IP placeholder with the IP address of the virtual machine that you obtained in the previous step.

    ssh USERNAME@SERVER-IP
    
  • Once you have accessed the server, create a folder to store the cluster specification files, SSH key pair, and certificates.

    mkdir aks-resources
    cd aks-resources
    
  • The next step is to create an SSH RSA key pair for the cluster to use. This key pair is used to configure SSH access to the Kubernetes cluster nodes. To do so, execute the following command:

    ssh-keygen -t rsa -b 2048 -C "AKS nodes ssh-rsa keypair" -f ./aks-keys
    
  • Now, it is time to install the AKS Engine CLI. Find the appropriate AKS Engine version in the supported Kubernetes versions table. In this guide, this is the command used:

    curl -o get-akse.sh https://raw.githubusercontent.com/Azure/aks-engine/master/scripts/get-akse.sh
    chmod 700 get-akse.sh
    ./get-akse.sh --version v0.43.0
    
  • To test that the engine was successfully installed, run aks-engine version.

    aks-engine version
    
    Version: v0.43.0
    GitCommit: 8928a4094
    GitTreeState: clean
    

The AKS Engine CLI is installed. You can now use it to deploy a Kubernetes cluster.
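
As an optional check before writing the cluster specification, you can also list the Kubernetes versions supported by the AKS Engine release you just installed. The command below is part of the AKS Engine CLI; the exact list returned depends on the engine version:

    aks-engine get-versions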

Step 3: Deploy a Kubernetes cluster with the AKS Engine CLI

The first step to perform before deploying a Kubernetes cluster in your Azure Stack is to define the cluster specification. The default architecture of the cluster consists of three virtual machines acting as master nodes, three virtual machines acting as worker nodes, and two load balancers - one placed between the master nodes and the public Internet, and an internal one between the master and worker nodes.

To define the cluster specification, follow these instructions:

  • Download the kubernetes-azurestack.json file that contains the AKS cluster specification.

  • Open the kubernetes-azurestack.json file with your favorite text editor. You will see that the default cluster configuration consists of three master and three worker nodes to ensure high availability in the control plane. You can leave these values as they are, as well as the API version.

  • Edit the following values to update the API model and adapt it to your cluster specifications:

    • portalURL: Provide the URL to the tenant portal
    • dnsPrefix: Enter a DNS prefix for your cluster
    • keyData: Add the cluster public SSH key created previously (the contents of the aks-keys.pub file)
      },
      "customCloudProfile": {
          "portalURL": "https://MY.PORTAL.URL",
          "identitySystem": ""

    [...]

      },
      "masterProfile": {
          "dnsPrefix": "MY-DNS-PREFIX",
          "distro": "aks-ubuntu-16.04",
          "count": 3,
          "vmSize": "Standard_D2_v2"

    [...]

      ],
      "linuxProfile": {
          "adminUsername": "azureuser",
          "ssh": {
              "publicKeys": [
                  {
                      "keyData": "ssh-rsa MY-PUBLIC-KEY-PAIR"
    
    [...]
    

Once you have added these specifications to the API model document, you can deploy the Kubernetes cluster by running the following command. Replace the RESOURCE-GROUP-NAME and OUTPUT-FOLDER-NAME placeholders with the name of the resource group where you created the client server and a name for the folder where the outputs of this command will be stored, respectively. The values for the remaining placeholders should be provided by your service provider or your organization's administrator.

aks-engine deploy \
--azure-env AzureStackCloud \
--location westus \
--resource-group RESOURCE-GROUP-NAME \
--api-model ./kubernetes-azurestack.json \
--output-directory OUTPUT-FOLDER-NAME \
--client-id AZURE-STACK-CLIENT-ID \
--client-secret AZURE-STACK-CLIENT-SECRET \
--subscription-id AZURE-STACK-SUBSCRIPTION-ID

This operation will take about 30 minutes or more. Once it completes, you can check all the new resources associated with your cluster by navigating to the Azure Stack portal and inspecting the server's resource group.

Resource Groups
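
If you prefer working from the command line and have the Azure CLI installed and configured to target your Azure Stack environment (this is an assumption; this guide uses the portal), you can list the same resources with a command similar to the following:

    az resource list --resource-group RESOURCE-GROUP-NAME --output table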

Configure access to the cluster

To access the Kubernetes cluster, it is necessary to use the kubeconfig file, which defines cluster access settings such as users and contexts. Follow these instructions:

  • Create a copy on your local machine of the kubeconfig.json file created on the client virtual machine when deploying the cluster. You will find it under the kubernetes-certs/kubeconfig/ directory.

  • Run the following command, replacing PATH-TO-KUBECONFIG with the path where the local copy of the kubeconfig.json file is stored:

    export KUBECONFIG=PATH-TO-KUBECONFIG/kubeconfig.json
  • Check the status of the cluster by executing the following:

    kubectl get componentstatuses
    

    You will see an output message:

    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health":"true"}
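
As an additional check, you can list the cluster nodes and confirm that every master and worker node reports a Ready status:

    kubectl get nodes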
    

Install the kubectl command-line tool and Helm

To start using and configuring your cluster, it is recommended to install both the kubectl command-line tool (CLI) and Helm.
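
If you followed this guide on a fresh Ubuntu client VM, both tools can be installed with commands similar to the ones below. These are a minimal sketch: download locations and versions change over time, so check the official kubectl and Helm documentation for the current instructions.

    # Download the latest stable kubectl binary and place it on the PATH
    curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/kubectl

    # Install Helm 3 using the official installer script
    curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Pick a kubectl version that is within one minor release of the Kubernetes version deployed by the AKS Engine.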

Step 4: Secure the cluster

Once the cluster is running in your Azure Stack and configured to work with kubectl and Helm, there are some best practices that you should follow in order to secure it and avoid exposing the master nodes to the Internet. These are the actions you will perform in this step to secure your cluster:

  • Remove public SSH access to the master nodes.
  • Deploy the NGINX Ingress Controller for load balancing Kubernetes services.
  • Install Cert-Manager to generate SSL certificates for the services that will be exposed to the Internet.

Remove public SSH access to the master nodes

To remove public SSH access to the master nodes, you need to delete the inbound NAT rules of the master's load balancer. Follow the steps below:

  • In the resource group where the cluster is deployed, click on the "k8-master-lb" resource:
Kubernetes master load balancer
  • In the resulting screen, under the "Settings" section, click "Inbound NAT rules":
Inbound NAT rules
  • You will find three inbound NAT rules, one associated with each master node. Click the right-side menu and select the "Delete" option. Repeat this action for each rule.
Delete inbound NAT rules

Your master nodes are no longer accessible via SSH from the public Internet.
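
If you prefer to remove the rules from the command line, and assuming you have the Azure CLI configured to target your Azure Stack environment, the same result can be achieved with commands similar to the following (the rule name below is a placeholder):

    # List the inbound NAT rules configured on the master load balancer
    az network lb inbound-nat-rule list --resource-group RESOURCE-GROUP-NAME --lb-name k8-master-lb --output table

    # Delete each rule returned by the previous command
    az network lb inbound-nat-rule delete --resource-group RESOURCE-GROUP-NAME --lb-name k8-master-lb --name NAT-RULE-NAME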

Deploy the NGINX Ingress Controller

To deploy the NGINX Ingress Controller you can use the Bitnami Helm chart. Follow these instructions:

  • Add the Bitnami repository and install the chart by executing the following commands:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    # Create the namespace for the controller (Helm 3 does not create it automatically)
    kubectl create namespace nginx-ingress
    helm install nginx-ingress bitnami/nginx-ingress-controller \
      --namespace nginx-ingress \
      --set replicaCount=2
    

You should see output similar to this:

Deploy NGINX Ingress Controller
  • Execute the kubectl get pods command as follows to check the resources deployed with the Helm chart:
    kubectl get pods -n nginx-ingress

This command will show that the chart deployed three pods - two controllers and one backend:

Deploy NGINX Ingress Controller
  • Execute the kubectl get svc command to check the services associated with this new resource - a load balancer and the backend service:

    kubectl get svc -n nginx-ingress
    

    You should see output like this:

Deploy NGINX Ingress Controller
  • Navigate back to the Azure Stack portal to check your resource group. As you can see, a LoadBalancer service type has been created. This translates into a load balancer created in Azure Stack and linked to the Kubernetes worker nodes.
NGINX Ingress Controller load balancer created
  • Click on the new load balancer, then under the "Settings" section, click "Backend pools". You will see that the load balancer is connected to the backend pool, which is comprised of two virtual machines that correspond to the Kubernetes worker nodes.
NGINX Ingress Controller load balancer
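
As an optional test of the new controller, you can send a request to the external IP address of the LoadBalancer service. Since no Ingress rules have been defined yet, the request should be answered by the default backend, which typically returns a 404 response. EXTERNAL-IP below is a placeholder for the address shown by kubectl get svc:

    # Find the external IP address of the ingress controller service
    kubectl get svc -n nginx-ingress

    # A request to that address should return a 404 from the default backend
    curl -i http://EXTERNAL-IP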

Install Cert-Manager

Cert-Manager is a native Kubernetes certificate management controller. It helps you generate and manage TLS certificates from multiple sources, such as Let's Encrypt. It is a key piece of your Kubernetes infrastructure, providing up-to-date, valid certificates to the applications you will deploy on your cluster. To install Cert-Manager in your cluster, follow the steps below:

  • First, install the CustomResourceDefinition resources, which Cert-Manager uses to configure Certificate Authorities and request certificates. Run the command below:

    kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
    

    This will create the following resources:

Install CustomResourceDefinition resources
  • Create a namespace for Cert-Manager by executing the following:

    kubectl create namespace cert-manager
    
  • Add the Jetstack Helm repository as follows:

    helm repo add jetstack https://charts.jetstack.io
    
  • Update your local Helm chart repository by executing the helm repo update command.

  • Install the official Cert-Manager Helm chart. This example uses version 0.11.0. Check out the latest version of the chart in its GitHub repository.

    helm install cert-manager jetstack/cert-manager \
      --namespace cert-manager \
      --version v0.11.0
    

    Once the installation is done, you will see output similar to this:

Install Cert-Manager

You can also run the kubectl get pods -n cert-manager command to check that Cert-Manager has been successfully installed.
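
For reference, with version 0.11.0 of the chart this check should show three pods in the Running state: one for the cert-manager controller, one for the cainjector, and one for the webhook (pod name suffixes will differ in your cluster):

    kubectl get pods -n cert-manager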

To complete the installation of Cert-Manager, it is recommended to configure a ClusterIssuer resource in your cluster. It represents a certificate authority from which signed x509 certificates can be obtained. In this case, we will configure the Let's Encrypt issuer.

  • To configure the ClusterIssuer, create a YAML file named letsencrypt-clusterissuer.yaml that includes the content below.
Tip

Remember to replace USER@EXAMPLE.COM with a valid email address and EXAMPLE-ISSUER-ACCOUNT-KEY with a meaningful name for the secret.

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: USER@EXAMPLE.COM
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: EXAMPLE-ISSUER-ACCOUNT-KEY
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
  • To apply the configuration defined in the file above, run the following command:

    kubectl apply -f letsencrypt-clusterissuer.yaml
    

    You will see a success notification: clusterissuer.cert-manager.io/letsencrypt created.
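
    You can also confirm that the issuer registered successfully with Let's Encrypt by inspecting the resource; its status should report it as ready before you start requesting certificates:

    kubectl get clusterissuer letsencrypt
    kubectl describe clusterissuer letsencrypt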

Congratulations! Now you have a Kubernetes cluster running on your Azure Stack infrastructure. Check out our tutorials to learn how to deploy applications using Bitnami Helm charts. You can also continue reading the next article in this series to learn how to quickly deploy Bitnami applications on AKS and Azure Stack with Kubeapps.

Useful links

This tutorial is part of the series

Install the AKS Engine on Azure Stack to Deploy Kubernetes Cluster and Start Deploying Applications with Kubeapps

Learn how to install the AKS Engine on Azure Stack to deploy a Kubernetes cluster, and how to install and use the Kubeapps dashboard to deploy, manage, and upgrade Bitnami applications on your cluster.