Enforcing service privacy on Kubernetes

Bitnami is charging ahead with our internal Kubernetes adoption: it's the platform we're deploying all of our new production services on. While deploying public production services on Kubernetes is great, the software development lifecycle phases that precede production also call for application deployments that aren't public. In this article we'll walk through how events in the Kubernetes API can be leveraged to enforce that.

Not Everything Should Be Public

Kubernetes enables developers to perform self-service installation of their applications to a cluster with exceptional ease. Given a YAML file, wow-nginx.yml, describing a Deployment (which manages your pods and replica sets) and a Service for your application, like this:

# Kubernetes v1.5.x
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: oh-yea-nginx
spec:
  # Two pods should exist at all times.
  replicas: 2
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      - name: nginx
        # Run this image
        image: nginx:1.10
---
kind: Service
apiVersion: v1
metadata:
  # Unique key of the Service instance
  name: yay-nginx
spec:
  ports:
    # Accept traffic sent to port 80
    - name: http
      port: 80
      targetPort: 80
  selector:
    # Load balance traffic across Pods matching
    # this label selector
    app: nginx
  type: LoadBalancer

simply type

$ kubectl apply -f wow-nginx.yml

and your app is running. That was easy. However, for applications that should not expose an external load balancer, we have a problem. On clusters that are intended only for pre-production workloads, or that host services that should only be reachable by users with access to our AWS VPCs, there's nothing preventing a service from being made public.

Technical Specifics

We’ve previously shared how Bitnami provisions Kubernetes clusters on AWS. With Kubernetes infrastructure running on AWS, the default behavior of a LoadBalancer service is to launch a public AWS Elastic Load Balancer (ELB). The ELB service provides layer 4 load balancing and SSL termination. Beyond AWS, each cloud has its own implementation for load balancing, but the default is to expose a load balanced service publicly. To keep the service internal to our VPC, the service definition must include an additional annotation in its metadata, like this:

---
kind: Service
apiVersion: v1
metadata:
  # Unique key of the Service instance
  name: yay-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
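
After applying a manifest with that annotation, you can check that the provisioned ELB is internal rather than internet-facing by describing the service. The hostname below is only an example, but internal ELB hostnames typically begin with "internal-":

$ kubectl describe service yay-nginx | grep "LoadBalancer Ingress"
LoadBalancer Ingress:   internal-a1b2c3d4e5f6-1234567890.us-east-1.elb.amazonaws.com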

There are other methods of restricting access to the load balancer, such as the loadBalancerSourceRanges directive (sketched below). However, for clusters hosting workloads that should never be public (e.g. development and continuous integration deployments), we were reliant on developers remembering to put the required incantation in their service definitions. What we wanted was a way to administratively ensure that those services were never publicly exposed.
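
For reference, a minimal sketch of the loadBalancerSourceRanges approach looks like this; the CIDR block shown is a placeholder for your own VPC's address range:

---
kind: Service
apiVersion: v1
metadata:
  name: yay-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: nginx
  # Only accept traffic from these source CIDR blocks
  # (placeholder value; substitute your VPC's range)
  loadBalancerSourceRanges:
    - 10.0.0.0/16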

We tried modifying the VPC's network ACLs to act as a firewall blocking inbound access to load balancers created with public IP addresses. However, we couldn't use that approach because of the stateless nature of network ACLs on AWS: they also blocked the return traffic for outbound connections initiated from inside the VPC (e.g. nodes pulling container images from external registries). So we effectively had no way to administratively prevent applications from exposing their services through a public load balancer.

We considered implementing an admission controller configuration that rejected the creation of public services, but felt that it posed other manageability challenges. It would be preferable if the default behavior of service load balancers were a cluster-level configuration option. As Kubernetes is a rapidly evolving platform, we're hoping to see this become a feature in future releases.

Fortunately, changes to resources on the cluster are exposed through the Kubernetes API, which lets a client watch events on the cluster. We decided to leverage that rather than wait for a cluster-level configuration option. This is how it works: when a service is deployed to a Kubernetes cluster on which we're administratively enforcing private services, our watcher inspects its metadata. If the requisite annotation is absent, the service is deleted and a message is sent to a channel in our corporate Slack instance.
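
As a rough illustration of the check the watcher performs (our actual implementation talks to the Kubernetes API directly; unlike this pipeline, it also deletes the offending service and posts to Slack), the same inspection could be expressed with kubectl and jq:

# Stream service changes and print any LoadBalancer service that is
# missing the internal load balancer annotation -- these are the
# services our watcher would delete and report.
$ kubectl get services --all-namespaces --watch -o json \
    | jq -r 'select(.spec.type == "LoadBalancer"
                    and .metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-internal"] == null)
             | .metadata.namespace + "/" + .metadata.name'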

The service watcher effectively enforces that the only load balanced services permitted to run on the cluster are those using internal load balancers. If a developer's service isn't working because it tried to use a public load balancer, the presence of the notification in Slack provides transparency as to why. The source code for this implementation is available on GitHub; please let us know if you find it useful too!

Future Consideration

As we’ve adopted an ingress controller on the cluster for layer 7 request routing, our web services should be using it instead of allocating their own load balancers. Unfortunately, there's no cluster-level configuration parameter to enforce that web services use the cluster's ingress controller. We may consider modifying our cluster watcher to enforce that services cannot create their own load balancers for ports 80 and 443 under any circumstances; they would be required to use the ingress controller instead. That has implications for how SSL termination is implemented, but we'll save that as a topic for a future blog post. Stay tuned!
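
To sketch what that looks like, a web service would keep a plain (ClusterIP) Service rather than a LoadBalancer one, and publish itself through the shared ingress controller with an Ingress resource along these lines; the hostname is a placeholder and the exact behavior depends on the ingress controller in use:

# Kubernetes v1.5.x
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: yay-nginx
spec:
  rules:
    # Route HTTP requests for this (placeholder) hostname
    # to the yay-nginx service on port 80
    - host: nginx.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: yay-nginx
              servicePort: 80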

Want to reach the next level in Kubernetes?