Over the past few months, Bitnami has been working with non-root containers. We realized that non-root images add an extra layer of security to containers. Therefore, we decided to release a selected subset of our containers as non-root images so that our users can benefit from them.
In this blog post we will see what a Bitnami non-root Dockerfile looks like by examining the Bitnami Nginx Docker image. As an example of how non-root containers can be used, we will walk through deploying Ghost on OpenShift. Finally, we will cover some of the issues we faced while migrating all of these containers to non-root images.
What are non-root containers?
By default, Docker containers run as the root user. This means that you can do whatever you want in your container, such as install system packages, edit configuration files, bind privileged ports, adjust permissions, create system users and groups, or access networking information.
With a non-root container you can't do any of this. A non-root container should be configured only for its main purpose, for example, running the Nginx server.
Why use a non-root container?
Mainly because it is a security best practice. If there is a container engine security issue, running the container as an unprivileged user will prevent malicious code from escalating privileges on the host node. To learn more about Docker's security features, see this guide.
Another reason for using non-root containers is that some Kubernetes distributions force you to use them. For example, OpenShift, a Red Hat Kubernetes distribution, runs whichever container you want with a random UID. So, unless the Docker image is prepared to work as a non-root user, it probably won't work due to permission issues. The Bitnami Docker images that have been migrated to non-root containers work out-of-the-box on OpenShift.
How to create a non-root container?
To explain how to build a non-root container image, we will use our Nginx non-root container and its Dockerfile.
```dockerfile
FROM bitnami/minideb-extras:jessie-r22
LABEL maintainer "Bitnami <email@example.com>"
ENV BITNAMI_PKG_CHMOD="-R g+rwX" ...
RUN bitnami-pkg unpack nginx-1.12.2-0 --checksum cb54ea083954cddbd3d9a93eeae0b81247176235c966a7b5e70abc3c944d4339
...
USER 1001
ENTRYPOINT ["/app-entrypoint.sh"]
CMD ["nginx","-g","daemon off;"]
```
- The `BITNAMI_PKG_CHMOD` env var is used to define file permissions for the folders where we need to write, read or execute. The `bitnami-pkg` script reads this env var and applies the changes.
- The `RUN bitnami-pkg unpack nginx...` instruction unpacks the Nginx files and changes their permissions as stated by the `BITNAMI_PKG_CHMOD` env var. Up until this point, everything runs as the root user.
- Later, the `USER 1001` directive switches the user from the default root to `1001`. Although we specify the user `1001`, keep in mind that this is not a special user; it is just a UID that doesn't match any existing user in the image. Moreover, OpenShift ignores the `USER` directive of the Dockerfile and launches the container with a random UID. Because of this, non-root images cannot contain configuration specific to the user running the container. From this point to the end of the Dockerfile, everything runs as the `1001` user.
- Finally, the entrypoint is in charge of configuring Nginx. It is worth mentioning that no `www-data` or similar user is created, since the Nginx process will run as the `1001` user. Also, because we are running the Nginx service as an unprivileged user, we cannot bind port 80, so we must configure Nginx to listen on port 8080 instead.
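You can check this behaviour locally. A quick sketch, assuming Docker is installed and the `bitnami/nginx` image is available (the container name is illustrative):

```shell
# Start the non-root Nginx container, mapping the unprivileged port 8080
docker run --rm -d --name non-root-nginx -p 8080:8080 bitnami/nginx

# The main process runs as UID 1001, not root
docker exec non-root-nginx id -u

# The server listens on 8080 because an unprivileged user cannot bind port 80
curl -I http://localhost:8080/

# Clean up
docker rm -f non-root-nginx
```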
How to deploy Ghost in OpenShift
For simplicity we will use Minishift, a tool that helps you run OpenShift locally.
- Start the cluster and load the OpenShift client environment:
```shell
minishift start
eval $(minishift oc-env)
```
- Deploy both MariaDB and Ghost images:
```shell
oc new-app --name=mariadb ALLOW_EMPTY_PASSWORD=yes --docker-image=bitnami/mariadb
oc new-app --name=ghost --docker-image=bitnami/ghost
```
- Finally expose the Ghost service and access the URL:
```shell
oc expose svc/ghost
oc status
```
At this point, launch the Minishift dashboard, check the Ghost logs, and access the application.
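A sketch of those steps, assuming a recent Minishift release where the web console is opened with the `minishift console` command:

```shell
# Open the OpenShift web console in a browser
minishift console

# Follow the Ghost container logs from the CLI
oc logs -f dc/ghost
```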
The logs from the Ghost container show that it has been successfully initialized. Access the Ghost application by clicking the service URL, which you can find in the top-right corner of the first screenshot.
Lessons learned: issues and troubleshooting
All that glitters is not gold. Non-root containers have some disadvantages. Below are some issues we've run into as well as their possible solutions.
Look and feel
When you exec into the container, the prompt looks strange because the user does not exist:
```shell
I have no name!@a0e5d8399c5b:/$
```
Debugging experience
Debugging issues in non-root containers can be tricky. Installing system packages such as a text editor, or running network utilities, is not allowed because we don't have enough permissions.
As a workaround, it is possible to edit the Dockerfile to install a system package. Alternatively, we can start the container as the root user, using the `--user root` flag for Docker or the `user: root` directive for Docker Compose.
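For example, a debugging session could be started like this (a sketch, assuming the `bitnami/nginx` image; the Compose snippet in the comments is illustrative):

```shell
# Start a throwaway root shell in the image to install debugging tools
docker run --rm -it --user root bitnami/nginx bash

# With Docker Compose, the equivalent override would be:
#   services:
#     nginx:
#       image: bitnami/nginx
#       user: root
```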
Mounted volumes
Other issues arise when you try to mount a folder from your host. Since Docker mounts host volumes preserving the UID and GID from the host, permission issues in the Docker volume are possible: the user running the container may not have the appropriate privileges to write to the volume.
Possible solutions are running the container with the same UID and GID as the host folder, or changing the permissions of the host folder before mounting it in the container.
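Both workarounds can be sketched as follows (the `./data` folder and the `/bitnami` mount path are illustrative):

```shell
# Option 1: run the container with the same UID and GID as the host user
docker run --rm --user "$(id -u):$(id -g)" -v "$PWD/data:/bitnami" bitnami/nginx

# Option 2: relax permissions on the host folder before mounting it
chmod -R g+rwX ./data
docker run --rm -v "$PWD/data:/bitnami" bitnami/nginx
```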
Volumes in Kubernetes
Data persistence is configured using persistent volumes. Because Kubernetes mounts these volumes with the root user as the owner, non-root containers don't have permission to write to the persistent directory.
The following are some things we can do to solve these permission issues:
- Use an init-container to change the permissions of the volume before mounting it in the non-root container. Example:
```yaml
spec:
  initContainers:
  - name: volume-permissions
    image: busybox
    command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
    volumeMounts:
    - mountPath: /bitnami
      name: nginx-data
  containers:
  - image: bitnami/nginx:latest
    name: nginx
    volumeMounts:
    - mountPath: /bitnami
      name: nginx-data
```
- Use Pod Security Policies to specify the user ID and the FSGroup that will own the pod volumes. (Recommended)
```yaml
spec:
  securityContext:
    runAsUser: 1001
    fsGroup: 1001
  containers:
  - image: bitnami/nginx:latest
    name: nginx
    volumeMounts:
    - mountPath: /bitnami
      name: nginx-data
```
Config Maps in Kubernetes
This issue is very similar to the previous one. Mounting a ConfigMap into a non-root container creates the file path with root permissions, so if the container tries to write anything else to that path, it will fail with a permission error. Pod Security Policies don't seem to work for ConfigMaps, so we have to use an init-container to fix the permissions if necessary.
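Since the ConfigMap mount itself is read-only, the init-container cannot simply chmod it in place; a common pattern is to copy the files into a writable volume instead. A sketch, assuming a ConfigMap named nginx-config (all names and paths here are illustrative):

```yaml
spec:
  initContainers:
  - name: config-permissions
    image: busybox
    # Copy the read-only ConfigMap files into a writable emptyDir volume
    command: ['sh', '-c', 'cp /config-ro/* /config/ && chmod -R g+rwX /config']
    volumeMounts:
    - mountPath: /config-ro
      name: config-read-only
    - mountPath: /config
      name: config-writable
  containers:
  - image: bitnami/nginx:latest
    name: nginx
    volumeMounts:
    - mountPath: /opt/bitnami/nginx/conf
      name: config-writable
  volumes:
  - name: config-read-only
    configMap:
      name: nginx-config
  - name: config-writable
    emptyDir: {}
```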
Issues with specific utilities or servers
Some utilities or servers may run some user checks and try to find the user in the /etc/passwd file.
For example, Git versions prior to 2.6.5 required commands to be run as an existing user; otherwise, Git complains:
```shell
git clone https://github.com/tompizmor/charts
Cloning into 'charts'...
remote: Counting objects: 7, done.
remote: Total 7 (delta 0), reused 0 (delta 0), pack-reused 7
Unpacking objects: 100% (7/7), done.
Checking connectivity... done.
fatal: unable to look up current user in the passwd file: no such user
```
Another example of a server with this issue is Zookeeper: the startup logs show that it is unable to determine the user name or the user home. However, this issue is harmless, as Zookeeper runs perfectly well after that.
```shell
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:os.version=4.4.0-93-generic
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:user.name=?
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:user.home=?
zookeeper_1 | 2017-10-19 09:55:16,405 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/
```
Non-root containers' lights and shadows
We have seen that building a non-root Docker image is easy and can be a lifesaver in case of a security issue. Running these images on OpenShift is also straightforward. These are good reasons to start using non-root containers more frequently.
However, besides the previous advantages, we also mentioned a set of drawbacks that we should take into account before moving to a non-root approach, especially regarding file permissions.
To go through the features and issues yourself, take a look at one of the following Bitnami non-root containers.
Also, if you are interested in non-root containers and Kubernetes security, I encourage you to take a look at the following articles: