Perform Machine-Based Image Recognition with TensorFlow on Kubernetes


If you're interested in machine learning, chances are you've heard about TensorFlow. TensorFlow is an open source software toolkit developed by Google for machine learning research. It has widespread applications for research, education and business and has been used in projects ranging from real-time language translation to identification of promising drug candidates. It's also in use at some of the world's most well-known companies, including Uber, IBM, Intel and Airbnb, to solve complex problems.

Although you can run TensorFlow on a single server, you might (depending on the model you wish to train and the complexity of the computations required) achieve better results by deploying it in a cluster. And that's where Kubernetes comes in. With Kubernetes and Helm, you can simplify and automate the deployment, scaling and management of a TensorFlow implementation in a cluster.

This guide walks you through the process of deploying a popular TensorFlow implementation, TensorFlow Serving with the TensorFlow Inception model, into a Kubernetes cluster running on Minikube or Google Container Engine (GKE) using Bitnami's official Docker image and Helm chart.

Assumptions and prerequisites

This guide assumes that you have Docker and Docker Compose installed on your local machine, and that you have access to a Kubernetes cluster with Helm installed. Here are the steps you'll follow in this guide:

  • Step 1: Get and test the Bitnami Docker image for TensorFlow Inception
  • Step 2: Deploy TensorFlow Inception in Kubernetes using Helm
  • Step 3: Test the TensorFlow Inception model with your own images

Step 1: Get and test the Bitnami Docker image for TensorFlow Inception

Bitnami offers Docker images for TensorFlow Serving and the TensorFlow Inception model, which is a model for machine-based image recognition. These Docker images make it easy to get started immediately with TensorFlow.

To begin, get the Docker Compose file for TensorFlow Inception from the Bitnami repository:

git clone https://github.com/bitnami/bitnami-docker-tensorflow-inception.git
cd bitnami-docker-tensorflow-inception

Review the Docker Compose file included in the Bitnami TensorFlow Inception repository, and you will see that it uses Bitnami's TensorFlow Serving and TensorFlow Inception model images. You will also notice that the TensorFlow Inception model image doesn't include the training data needed by the model. This is deliberate, to keep the image small, so you will need to download the training data separately before you can use the model. The Docker Compose file takes care of mounting the training data as a volume in the Bitnami Docker container.
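For orientation, the Docker Compose file looks roughly like the sketch below. The service names and port 9000 match the client command used later in this step, and the /tmp/model-data host path matches the download step that follows; the in-container mount path is an assumption, so treat this as illustrative rather than a copy of the actual file:

```yaml
# Simplified sketch of the Docker Compose file; the in-container volume path
# is an assumption and may differ from the file in the repository.
version: '2'
services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - '/tmp/model-data:/bitnami/model-data'
  tensorflow-inception:
    image: 'bitnami/tensorflow-inception:latest'
    depends_on:
      - tensorflow-serving
    volumes:
      - '/tmp/model-data:/bitnami/model-data'
```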

Run the following commands to download and extract the training data to the /tmp folder on the Docker host:

mkdir /tmp/model-data
curl -o '/tmp/model-data/inception-v3-2016-03-01.tar.gz' ''
cd /tmp/model-data/
tar xvf inception-v3-2016-03-01.tar.gz

Change back to the directory containing the Docker image and start the TensorFlow Inception containers with Docker Compose:

cd bitnami-docker-tensorflow-inception
docker-compose up

Then, test it by sending it an image file and checking the output using the command below. The TensorFlow Inception Docker image includes some sample image files you can use (a brief explanation of how you can use your own images is included later in this tutorial).

docker-compose exec tensorflow-inception inception_client --server=tensorflow-serving:9000 --image=/opt/bitnami/tensorflow-inception/tensorflow/tensorflow/contrib/pi_examples/label_image/data/grace_hopper.jpg

Compare the image referenced in the command above with what TensorFlow Inception has to say about it:

Query output using a sample image

Step 2: Deploy TensorFlow Inception in Kubernetes using Helm


Before performing the following steps, make sure you have a Kubernetes cluster running with Helm v3.x installed correctly. For detailed instructions, refer to our starter tutorial.

Now that you know the Docker images for the server and client work properly, the next step is to use a Helm chart to deploy the Bitnami Docker image for TensorFlow Serving to a Kubernetes cluster.

For this tutorial, we'll provide you with a ready-made Helm chart, which you can obtain from the Bitnami GitHub repository as follows:

git clone https://github.com/bitnami/charts.git
cd charts/incubator/tensorflow-inception

Here is what you should see in the directory:

+-- Chart.yaml
+-- templates
|   +-- deployment.yaml
|   +-- _helpers.tpl
|   +-- inception-job.yaml
|   +-- inception-pvc.yaml
|   +-- NOTES.txt
|   +-- svc.yaml
|   +-- pvc.yaml
+-- values.yaml

Here's a brief explanation of the most important components of this Helm chart:

  • Chart.yaml: This file contains the metadata of the Helm chart, such as the version or the description.
  • values.yaml: This file declares the variables to be passed to the templates.
  • templates/svc.yaml: This file defines a Kubernetes Service for the TensorFlow Serving server. Learn more about Services.
  • templates/inception-job.yaml: This file defines a Kubernetes Job. The Job, which runs once, takes care of downloading the Inception model data and exporting it to the correct format for use by TensorFlow Serving. Learn more about Jobs.
  • templates/pvc.yaml: This file defines a Kubernetes PersistentVolumeClaim (PVC). The corresponding PersistentVolume is used to store the configuration of the TensorFlow Serving server and make it persistent. Learn more about Persistent Volumes.
  • templates/inception-pvc.yaml: This file also defines a Kubernetes PVC. The corresponding PersistentVolume is populated with the Inception model's training data and mounted in the pods created by the deployment. Learn more about Persistent Volumes.
  • templates/deployment.yaml: This file defines a Kubernetes Deployment that creates and manages the pods running the TensorFlow Serving server. Learn more about Deployments.
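To make the template structure concrete, here is an illustrative sketch of what templates/svc.yaml might contain. The serviceType value is confirmed by the --set flag used later in this step and port 9000 by the client command in Step 1, but the helper template name and labels are assumptions, so the real file in the chart may differ:

```yaml
# Illustrative sketch of templates/svc.yaml; the helper name and labels are
# assumptions, not a copy of the chart's actual template.
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
spec:
  type: {{ .Values.serviceType }}
  ports:
    - port: 9000
      targetPort: 9000
  selector:
    app: {{ template "fullname" . }}
```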

For a more in-depth look at chart creation, we recommend reading our Helm Chart tutorial.

At this point, you have the Bitnami Docker images for TensorFlow Serving and the TensorFlow Inception model, and a custom Helm chart that deploys them. To deploy these images to your Kubernetes cluster using the Helm chart, follow these steps:

  • Make sure that you can connect to your Kubernetes cluster by executing the command below:

    kubectl cluster-info
  • Deploy the Helm chart by executing the following from the directory above the one containing the Helm chart:

    helm install tensorflow-inception ./tensorflow-inception

    If you're using Minikube, you can execute this command instead:

    helm install tensorflow-inception ./tensorflow-inception --set serviceType=NodePort

    Here's what you should see after the chart is deployed:

TensorFlow Inception deployment using a Helm chart
  • Obtain the cluster IP address and port for the running service using the commands shown in the post-install output. These commands will store the cluster IP address and port in the $APP_HOST and $APP_PORT environment variables. Note that you may need to wait a few minutes for the IP address and port to become available.

Alternatively, you can use the kubectl get svc or minikube ip commands to obtain the cluster IP address and port, and manually assign them to the $APP_HOST and $APP_PORT environment variables. Here's an example of the output when using kubectl get svc. Note that this image reflects the default LoadBalancer deployment. If you're performing a NodePort deployment, you will see different output (in particular, the port will be assigned randomly).
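If you set the variables manually, the pattern looks like the following sketch. The address and port shown are placeholders (203.0.113.10 is a documentation-reserved address); substitute the values reported by kubectl get svc or minikube ip for your own cluster:

```shell
# Placeholder values shown for illustration; replace them with the cluster IP
# address and port reported by `kubectl get svc` (or `minikube ip` on Minikube).
export APP_HOST=203.0.113.10
export APP_PORT=9000
echo "TensorFlow Serving address: $APP_HOST:$APP_PORT"
```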

Service endpoints

Step 3: Test the TensorFlow Inception model with your own images

Once the chart has been installed, test the model by executing a query against the TensorFlow Serving server running in Kubernetes. Since you've already set up TensorFlow Serving with the Inception model, the best way to do this is by sending it an image and checking the response to see if the image was correctly identified.

Here's an example of testing the model using the same sample image as before:

docker run --rm -it bitnami/tensorflow-inception inception_client --server=$APP_HOST:$APP_PORT --image=/opt/bitnami/tensorflow-inception/tensorflow/tensorflow/contrib/pi_examples/label_image/data/grace_hopper.jpg

Remember to ensure that the $APP_HOST and $APP_PORT variables are set to the cluster IP address and port respectively before running the previous command. Here's what you should see:

TensorFlow Inception query results using a sample image

Obviously, you'll want to use your own images as well, and it's quite easy to do this too. For example, if you have your images at /tmp/images, run the command below to have TensorFlow Inception identify one of them:

docker run --rm -it -v /tmp/images/:/user-images/ bitnami/tensorflow-inception inception_client --server=$APP_HOST:$APP_PORT --image=/user-images/image.jpg

Remember to replace the image and directory paths with values that reflect your environment and ensure that the $APP_HOST and $APP_PORT variables are set to the cluster IP address and port respectively.

Here's a sample of what TensorFlow Inception returns when presented with an image of Angkor Wat in Cambodia:

TensorFlow Inception query results using a custom image

Once you've got it all working correctly, remember that since TensorFlow Serving is now running on Kubernetes, you can easily increase or decrease the number of pods, perform rolling updates or balance traffic between pods (and you can read more about how to perform these tasks in our Helm tutorial). In short, you've got everything you need to begin building scalable, secure applications using TensorFlow. So, what are you waiting for?
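For example, scaling the serving layer can be sketched as follows. The deployment name used here is an assumption (check kubectl get deployments for the name your release actually created), and the commands are echoed as a dry run so they can be reviewed first; remove the leading echo to apply them to your cluster:

```shell
# The deployment name below is an assumption; confirm it with
# `kubectl get deployments` before running anything for real.
DEPLOYMENT=tensorflow-serving

# Echoed as a dry run; drop the `echo` prefix to run against the cluster.
echo kubectl scale deployment "$DEPLOYMENT" --replicas=3
echo kubectl rollout status deployment "$DEPLOYMENT"
```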

Useful links

To learn more about the topics discussed in this guide, use the links below: