Create a Continuous Integration Pipeline with Jenkins and Google Kubernetes Engine
Introduction
Jenkins is a popular open source tool for build automation, making it quick and efficient to build, test and roll out new applications in a variety of environments. One of those environments is Kubernetes, available as a service from all the leading cloud providers. A number of plugins are now available to connect Jenkins with Kubernetes and use it as a deployment target for automated builds.
If you're a Jenkins user interested in migrating your deployment infrastructure to Kubernetes, this guide gets you started. It uses Bitnami application stacks and container images to ease the task of building an automated, enterprise-ready CI/CD pipeline using Jenkins, Kubernetes, Docker Hub and GitHub.
Bitnami's Jenkins stack lets you deploy a secure Jenkins instance on the cloud, pre-configured with common plugins for SCM integration and pipeline creation.
Bitnami's infrastructure containers for Node.js, Ruby, Java and others make it easy to containerize your applications in a secure and reliable manner.
This guide shows you how to set up a CI/CD pipeline between a GitHub source code repository, Jenkins (deployed using the Bitnami Jenkins stack), Docker Hub, and one or more Kubernetes clusters running on Google Kubernetes Engine (GKE).
This pipeline, once configured, will trigger a new build every time it detects a change to the code in the GitHub repository. The code will be built as a Docker container (based on a Bitnami Node.js base container) and pushed to the Docker Hub container registry. The published container will then be deployed on a Kubernetes cluster for review and testing. Based on test results, the user can optionally choose to deploy to a production cluster.
Assumptions and prerequisites
This guide makes the following assumptions:
- You have deployed the Bitnami Jenkins stack on a cloud server and have the Jenkins administrator credentials. Learn about deploying Bitnami applications and obtaining credentials.
- You have an active Google Cloud Platform project and administrator credentials for that project. Learn about creating and managing Google Cloud Platform projects and using Google Cloud IAM.
- You have at least one multi-node Kubernetes cluster running on Google Kubernetes Engine (GKE). Learn about deploying a Kubernetes cluster on GKE.
- You have a GitHub account. If not, sign up for a free GitHub account.
- You have a Docker Hub account. If not, sign up for a free Docker Hub account.
Step 1: Prepare Jenkins
The first step is to prepare Jenkins to work with Docker and Kubernetes. This involves installing Docker and Kubernetes command-line tools on the Jenkins cloud server and installing the Jenkins GKE plugin. Follow the steps below:
Follow the instructions to install Docker.
Allow the user running Jenkins to connect to the Docker daemon by adding it to the docker group, then restart the services:
sudo usermod -aG docker tomcat
sudo /opt/bitnami/ctlscript.sh restart
Tip: By adding the user tomcat to the docker group, you are effectively granting it superuser rights, since membership in this group allows unrestricted access to the Docker daemon. This is a security risk and no mitigation is provided in this guide; you must address this risk separately in production systems. Please be aware of the risks before proceeding.
Install the kubectl command-line tool:
sudo apt-get update
sudo apt-get install kubectl
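Once both tools are installed, a quick sanity check confirms they are reachable from the PATH of the user running Jenkins. The sketch below is illustrative only; it probes the PATH and does not verify daemon connectivity or cluster access:

```shell
# Report whether each CLI tool the pipeline relies on is installed
for tool in docker kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done | tee tool-check.txt
```

If either tool reports NOT FOUND, revisit the installation steps above before continuing.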
This is also a good time to install the necessary Jenkins plugins, as follows:
- Log in to Jenkins using administrator credentials.
- If this is your first log-in, you will be prompted to install plugins. This is an optional step: you may install the set of suggested plugins or choose a specific set of plugins to install.

- Confirm the Jenkins server URL and wait for Jenkins to install the selected plugins. In some cases, Jenkins may prompt you to restart and log in again.
- Navigate to the "Manage Jenkins -> Manage Plugins" page.
- On the resulting page, select the "Installed" tab and confirm that the "Docker Pipeline" and "GitHub" plugins are installed (these are included by default with the Bitnami Jenkins stack).

- Select the "Available" tab and select the "Google Kubernetes Engine" plugin. Click the "Install without restart" button. Wait for the plugin to be installed and become available for use.

Step 2: Configure a Google Cloud Platform service account
Jenkins will interact with the Kubernetes cluster using a Google Cloud Platform service account. Your next step is to set up this service account and give it API access. Follow these steps:
- Log in to Google Cloud Platform and select your project.
- Navigate to the "IAM & admin -> Service accounts" page and create a new service account. Learn more about creating service accounts in the Google Cloud Platform documentation.
- Create a new JSON key for the service account. Download and save this key, as it will be needed by Jenkins.

The next step is to enable the APIs needed by the Jenkins GKE plugin. Follow these steps:
- Navigate to the "APIs & services -> Library" page.
- Search for and enable each of the APIs required by the Jenkins GKE plugin, such as the Kubernetes Engine API.
- After enabling each API, click the "Manage" button on the API detail page and confirm that the service account created previously has access to the API (or add access if required).
- Navigate to the "IAM & admin -> IAM" page.
- Click the "Add" button. Select the service account created in the previous step and assign it the "Kubernetes Engine Admin" role.

- Save the changes.
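If you prefer the command line, the same setup can be sketched with the gcloud CLI. This is an illustrative outline rather than part of the console steps above: the account name jenkins-deployer is hypothetical, PROJECT-ID is a placeholder, and the commands assume gcloud is installed and authenticated with sufficient permissions.

```shell
# Create a dedicated service account for Jenkins (name is hypothetical)
gcloud iam service-accounts create jenkins-deployer \
    --display-name "Jenkins deployer" --project PROJECT-ID

# Download a JSON key for the account (this is the file Jenkins needs)
gcloud iam service-accounts keys create jenkins-key.json \
    --iam-account jenkins-deployer@PROJECT-ID.iam.gserviceaccount.com

# Grant the Kubernetes Engine Admin role
gcloud projects add-iam-policy-binding PROJECT-ID \
    --member serviceAccount:jenkins-deployer@PROJECT-ID.iam.gserviceaccount.com \
    --role roles/container.admin

# Enable the Kubernetes Engine API
gcloud services enable container.googleapis.com --project PROJECT-ID
```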
Step 3: Create and enable a GitHub repository webhook
Next, create a GitHub repository and configure it such that GitHub automatically notifies Jenkins about new commits to the repository via a webhook.
- Log in to GitHub and create a new repository. Note the HTTPS URL to the repository.

- Click the "Settings" tab at the top of the repository page.
- Select the "Webhooks" sub-menu item. Click "Add webhook".
- In the "Payload URL" field, enter the URL http://IP-ADDRESS/jenkins/github-webhook/, replacing the IP-ADDRESS placeholder with the IP address of your Jenkins deployment.

- Ensure that the "Just the push event" radio button is selected and save the webhook.
Remember that you already installed the GitHub plugin as part of the Jenkins preparation in Step 1, so you don't need to configure anything further in Jenkins to enable this webhook. Learn more about the Jenkins GitHub plugin.
The repository will be empty at this point but don't worry, you'll be adding some code to it very soon!
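As an aside, GitHub can also sign webhook deliveries if you set a secret in the webhook form; the pipeline in this guide works without one. The signature arrives in the X-Hub-Signature-256 header and is an HMAC-SHA256 of the payload body. A sketch of how that value is computed, using a hypothetical secret and payload:

```shell
# Hypothetical webhook secret and payload, for illustration only
SECRET='my-webhook-secret'
PAYLOAD='{"ref":"refs/heads/master"}'

# HMAC-SHA256 of the payload, as GitHub computes it for X-Hub-Signature-256
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "X-Hub-Signature-256: sha256=$SIG" | tee sig.txt
```

A receiver recomputes the same HMAC over the raw body and compares it to the header to verify that the delivery really came from GitHub.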
Step 4: Create a Jenkins pipeline project
At this point, you are ready to start setting up your Jenkins pipeline. Follow the steps below to create a new project.
- Log in to Jenkins (if you're not already logged in).
- Click "New item". Enter a name for the new project and set the project type to "Pipeline". Click "OK" to proceed.

- Select the "General" tab on the project configuration page and check the "GitHub project" checkbox. Enter the complete URL to your GitHub project.
- Select the "Build triggers" tab on the project configuration page and check the "GitHub hook trigger for GITScm polling" checkbox.

- Select the "Pipeline" tab on the project configuration page and set the "Definition" field to "Pipeline script from SCM". Set the "SCM" field to "Git" and enter the GitHub repository URL. Set the branch specifier to "*/master". This configuration tells Jenkins to look for a pipeline script named Jenkinsfile in the code repository itself.

- Save the changes.
At this point, your pipeline project has been created, but doesn't actually have any credentials or data to operate on. The next steps will add this information.
Step 5: Add credentials to Jenkins
This step will walk you through adding credentials to the Jenkins credentials store, so that the pipeline is able to communicate with the Kubernetes cluster and the Docker Hub registry. Follow the steps below:
- Navigate to the Jenkins dashboard and select the "Credentials" menu item.
- Select the "System" sub-menu and the "Global credentials" domain.
- Click the "Add credentials" link. Select the "Username with password" credential type and enter your Docker Hub username and password in the corresponding fields. Set the "ID" field to dockerhub. Click "OK" to save the changes.

- Click the "Add credentials" link. Select the "Google Service Account from private key" credential type and set the project name (which doubles as the credential identifier) to gke. Select the "JSON key" radio button and upload the JSON key obtained in Step 2. Click "OK" to save the changes.

With the credentials added, you're now ready to commit some code and test the pipeline.
Step 6: Write code
At this point, you are ready to add some code to the project. This section will create a simple "Hello, world" application in Node.js and then configure a Dockerfile to run it with Bitnami's Node.js development container image.
Follow these steps:
Create a working directory for the application on your local host:
mkdir myproject
cd myproject
Create a package.json file listing the dependencies for the project:
{
  "name": "simple-node-app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.13"
  }
}
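A typo in this file will only surface later, when npm install runs inside the Docker build, so it is worth validating the JSON locally first. A quick sketch using Python's built-in json.tool (any JSON-aware tool works equally well):

```shell
# Recreate the manifest (same content as above) and parse it as JSON
cat > package.json <<'EOF'
{
  "name": "simple-node-app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "main": "server.js",
  "scripts": { "start": "node server.js" },
  "dependencies": { "express": "^4.13" }
}
EOF

# Fails with a parse error if the file is not valid JSON
python3 -m json.tool package.json > /dev/null && echo "package.json: valid JSON"
```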
Create a server.js file for the Express application which returns a "Hello world" message on access:
'use strict';
const express = require('express');

// Constants
const PORT = process.env.PORT || 3000;

// App
const app = express();
app.get('/', function (req, res) {
  res.send('Hello world\n');
});

app.listen(PORT);
console.log('Running on http://localhost:' + PORT);
Create a Dockerfile with the following content:
FROM bitnami/node:9 as builder
ENV NODE_ENV="production"

# Copy app's source code to the /app directory
COPY . /app

# The application's directory will be the working directory
WORKDIR /app

# Install Node.js dependencies defined in '/app/package.json'
RUN npm install

FROM bitnami/node:9-prod
ENV NODE_ENV="production"
COPY --from=builder /app /app
WORKDIR /app
ENV PORT 5000
EXPOSE 5000

# Start the application
CMD ["npm", "start"]
This multi-stage Dockerfile creates a new image using Bitnami's Node.js container image as its base. It copies the application files to the container's /app directory and then runs npm install to install Express. It then creates a production-ready container image and configures the application to listen for requests on port 5000.
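Before wiring the build into Jenkins, you may want to confirm that the image builds and serves requests locally. A sketch, assuming a local Docker daemon and using DOCKER-HUB-USERNAME as a placeholder as elsewhere in this guide:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t DOCKER-HUB-USERNAME/hello:dev .

# Run it in the background, mapping the container's port 5000 to the host
docker run --rm -d -p 5000:5000 --name hello-test DOCKER-HUB-USERNAME/hello:dev

# The application should respond with "Hello world"
curl http://localhost:5000/

# Clean up
docker stop hello-test
```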
Create a deployment.yaml file in the repository which defines how the built container should be deployed on Kubernetes. Replace the DOCKER-HUB-USERNAME in the definition below with your Docker Hub username.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels:
    app: hello
spec:
  selector:
    matchLabels:
      app: hello
      tier: hello
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello
        tier: hello
    spec:
      containers:
      - name: hello
        image: DOCKER-HUB-USERNAME/hello:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
          name: hello
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    app: hello
spec:
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    app: hello
    tier: hello
  type: LoadBalancer
This definition pulls the built container from Docker Hub and creates a new deployment with it in your Kubernetes cluster. It also creates a LoadBalancer service so that the deployment can be accessed from outside the cluster.
Warning: With the configuration shown above, Google Kubernetes Engine will automatically provision an external load balancer and assign it a public IP address. You will incur additional charges for the load balancer and its IP address.
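Although the pipeline will apply this manifest for you, it can be useful to verify it by hand first. A sketch, assuming gcloud and kubectl are installed locally and the usual placeholders are replaced with your own values:

```shell
# Fetch credentials for the target cluster into your local kubeconfig
gcloud container clusters get-credentials CLUSTER-NAME --zone CLUSTER-LOCATION --project PROJECT-ID

# Apply the deployment and service definitions
kubectl apply -f deployment.yaml

# Check the rollout and the load balancer's external IP address
kubectl get deployments hello
kubectl get services hello
```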
Finally, create a pipeline script named Jenkinsfile with the content below. This is the script Jenkins will use to build and deploy your application.
This guide discusses two pipeline scenarios:
The first is a typical test or development scenario which requires a single Kubernetes cluster. Here, Jenkins will build and deploy the application on a single Kubernetes cluster. Users can then access the application to see the latest changes or perform integration testing.
The second is a more complex scenario involving two Kubernetes clusters: a development/test cluster and a production cluster. Here, Jenkins will initially build and deploy the application on the first development/test cluster for review and integration testing. It will then wait for user input and, based on that input, it will deploy the application on the production cluster.
Scenario 1: Deployment to a single cluster
The pipeline script shown below covers the first scenario. Replace the PROJECT-ID, CLUSTER-NAME, CLUSTER-LOCATION and DOCKER-HUB-USERNAME placeholders in the script below with your Google Cloud Platform project identifier, Kubernetes cluster name, Kubernetes cluster location and Docker Hub username respectively.
pipeline {
  agent any
  environment {
    PROJECT_ID = 'PROJECT-ID'
    CLUSTER_NAME = 'CLUSTER-NAME'
    LOCATION = 'CLUSTER-LOCATION'
    CREDENTIALS_ID = 'gke'
  }
  stages {
    stage("Checkout code") {
      steps {
        checkout scm
      }
    }
    stage("Build image") {
      steps {
        script {
          myapp = docker.build("DOCKER-HUB-USERNAME/hello:${env.BUILD_ID}")
        }
      }
    }
    stage("Push image") {
      steps {
        script {
          docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
            myapp.push("latest")
            myapp.push("${env.BUILD_ID}")
          }
        }
      }
    }
    stage('Deploy to GKE') {
      steps {
        sh "sed -i 's/hello:latest/hello:${env.BUILD_ID}/g' deployment.yaml"
        step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME, location: env.LOCATION, manifestPattern: 'deployment.yaml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
      }
    }
  }
}
This script defines some environment variables related to the Kubernetes cluster and a four-stage pipeline, as follows:
- The first stage checks out the code from GitHub.
- The second stage uses the docker.build command to build the application using the supplied Dockerfile. The built container is tagged with the build ID.
- The third stage pushes the built container to Docker Hub using the dockerhub credential created in Step 5. The pushed container is tagged with both the build ID and the latest tag in the registry.
- The fourth stage uses the GKE plugin to deploy the application to the Kubernetes cluster using the deployment.yaml file.
Tip: It is worth pointing out here that the GKE plugin internally uses kubectl, and kubectl will only trigger a redeployment if the deployment.yaml file changes. Therefore, the first step of the fourth stage uses sed to manually modify Jenkins' local copy of the deployment.yaml file (by updating the container's build ID) so as to trigger a new deployment with the updated container.
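The effect of that sed command can be reproduced locally. A sketch with a hypothetical build ID of 42 and a one-line stand-in for the manifest:

```shell
# A minimal stand-in for the image line in deployment.yaml
printf '    image: DOCKER-HUB-USERNAME/hello:latest\n' > demo.yaml

# Rewrite the tag exactly as the pipeline stage does (42 is a hypothetical build ID)
BUILD_ID=42
sed -i "s/hello:latest/hello:${BUILD_ID}/g" demo.yaml

cat demo.yaml
```

After the substitution the manifest references hello:42 instead of hello:latest, so kubectl sees a changed file and rolls out the newly tagged image.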
Scenario 2: Deployment to two clusters
If you wish to deploy to two clusters, use this version of the pipeline script instead. Replace the placeholders as before, noting that in this version, the CLUSTER-NAME-1 and CLUSTER-NAME-2 placeholders should reflect the names of your test and production Kubernetes clusters respectively.
pipeline {
  agent any
  environment {
    PROJECT_ID = 'PROJECT-ID'
    LOCATION = 'CLUSTER-LOCATION'
    CREDENTIALS_ID = 'gke'
    CLUSTER_NAME_TEST = 'CLUSTER-NAME-1'
    CLUSTER_NAME_PROD = 'CLUSTER-NAME-2'
  }
  stages {
    stage("Checkout code") {
      steps {
        checkout scm
      }
    }
    stage("Build image") {
      steps {
        script {
          myapp = docker.build("DOCKER-HUB-USERNAME/hello:${env.BUILD_ID}")
        }
      }
    }
    stage("Push image") {
      steps {
        script {
          docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
            myapp.push("latest")
            myapp.push("${env.BUILD_ID}")
          }
        }
      }
    }
    stage('Deploy to GKE test cluster') {
      steps {
        sh "sed -i 's/hello:latest/hello:${env.BUILD_ID}/g' deployment.yaml"
        step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME_TEST, location: env.LOCATION, manifestPattern: 'deployment.yaml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
      }
    }
    stage('Deploy to GKE production cluster') {
      steps {
        input message: "Proceed with final deployment?"
        step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME_PROD, location: env.LOCATION, manifestPattern: 'deployment.yaml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
      }
    }
  }
}
With this version, the four-stage pipeline is enhanced to five stages, with the final stage (production cluster deployment) dependent on the user's approval.
Step 7: Commit, test and repeat
At this point, you are ready to have Jenkins build and deploy your application. Since you already configured a GitHub webhook trigger in Step 3, committing your code to GitHub will automatically trigger the pipeline.
Initialize a Git repository in your working directory and commit and push the application code to GitHub. Replace the NAME and EMAIL-ADDRESS placeholders with your name and email address (if not already configured) and the CLONE-URL placeholder with the GitHub repository clone URL obtained in Step 3.
git config --global user.name "NAME"
git config --global user.email "EMAIL-ADDRESS"
git init
git remote add origin CLONE-URL
git add .
git commit -m "Initial commit"
git push origin master
Pushing this commit should automatically trigger the pipeline in Jenkins. To see the pipeline in action, navigate to the project page in Jenkins and confirm that the pipeline is running, as shown below:

In some cases, Jenkins may not properly respond to an incoming webhook for the first run of the pipeline. If the pipeline is not automatically triggered on its first run, manually trigger it by clicking "Build Now" on the Jenkins project page. Subsequent runs of the pipeline should occur automatically.
Use the kubectl get deployments and kubectl get services commands to check the status of your deployment on the Kubernetes cluster and obtain the load balancer IP address.
Browse to port 5000 of the load balancer IP address and you should see the output of the Node.js application.
If you are using the five-stage pipeline, you will notice that Jenkins pauses the job after deploying to the first cluster and displays the following message:

When you click "Proceed", the final stage of the pipeline will run and the application will be deployed to your second cluster.
To test the CI/CD feature, make a change to the application - for example, update the message "Hello world" in the server.js file to "Aloha world" - and push the change to GitHub.
sed -i 's/Hello world/Aloha world/g' server.js
git add .
git commit -m "Modified message text"
git push origin master
The new commit should trigger the pipeline, causing a new build-publish-deploy sequence, and the rebuilt container will be deployed on your cluster for review. As before, check the pipeline status in Jenkins, wait for it to complete and then browse to port 5000 of the load balancer IP address. You should see the revised output.
At this point, you have successfully created a simple CI/CD pipeline between Jenkins, GitHub, Docker Hub and one or more Kubernetes clusters running on Google Kubernetes Engine. You can now extend and enhance it with multiple branches, test execution and recording and notifications.
Useful links
To learn more about the topics discussed in this guide, use the links below:
- Bitnami Jenkins stack documentation
- Bitnami Node.js container
- Bitnami applications FAQ for Google Cloud Platform
- Bitnami documentation for Kubernetes deployments on Google Kubernetes Engine
- Jenkins documentation
- Jenkins GitHub plugin documentation
- Jenkins Pipeline documentation
- Jenkins GKE plugin documentation
- Google Cloud Platform project documentation
- Google Cloud Platform IAM documentation