Simple Kubernetes deployment versioning

I have been playing around with Kubernetes a bit lately, both at work and for some personal projects. In fact, the page you are reading now is served by a Docker container running on Kubernetes. Kubernetes is a complex product and a bit overkill for a simple website like this, but it gives me the opportunity to learn about its concepts in order to use them for more complex projects. One of the issues I ran into was how to update deployments to a newer image version while using declarative YAML configuration files. In this blog post I will share my solution.

Declarative configuration files are my preferred way to manage "things" in Kubernetes (or Docker, Ansible, or any other tool for that matter). Storing configuration as code and tracking it in the same version control repository as the rest of the code ensures that everybody on the team deploys with the same configuration, with a single command, while allowing the configuration to evolve with the application. This saves time and reduces the risk of making a mistake. And to further automate deployments, the CI server can use the same configuration to automatically deploy the application after a commit is pushed and the build has passed.

In Kubernetes, objects can be configured using YAML files. You can store multiple files in a directory and apply them all using kubectl apply -f <directory>. Kubernetes will then create and update objects to match the provided configuration. If an object is already up-to-date with the applied configuration, nothing will change. Unfortunately, this makes it a bit difficult to use kubectl apply to deploy future updates of the application. If you use your/image:latest as the image name, your configuration will never change and therefore Kubernetes will not try to update your existing deployment. Besides, using :latest is discouraged anyway because it makes it harder to track which version of an image is running or to roll back to an earlier version.
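For example, applying the same directory twice leaves everything untouched on the second run. The directory and object names below just illustrate the idea and match the configuration used later in this post; the output is roughly what kubectl prints:

kubectl apply -f kubernetes/
# deployment.apps/applicationname created

kubectl apply -f kubernetes/
# deployment.apps/applicationname unchanged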

Instead, the recommended way is to give every image a unique tag. But editing and committing (or reverting) the deployment.yaml file for every deployment or release is cumbersome and error-prone. And manually running kubectl set image to update the image tag is equally impractical and defeats the purpose of having declarative configuration in the first place. A common recommendation is to use Helm, which is primarily a package manager for Kubernetes but also supports templating of configuration files. For me, though, Helm felt like a lot of extra complexity that wasn't really required for such a simple problem. So instead I decided to create a simpler solution, based on answers from Stack Overflow and using tools that are already available on Linux (and can easily be installed on a Mac).

I use environment variables and variable substitution to dynamically pass image tags to Docker and Kubernetes. I'm using the following tools, most of which you probably already have installed on your machine and/or CI server:

  - Docker Compose, to build and push the images
  - envsubst (part of GNU gettext), to substitute variables in the Kubernetes configuration files
  - kubectl, to apply the configuration to the cluster
  - Make (or a plain shell script), to tie everything together
  - Git, to derive a version tag from the commit hash

Let's start with the docker-compose.yml. I like to use Docker Compose to build images, using its YAML files as declarative configuration with similar benefits to the Kubernetes configuration files. Besides, I use Compose to run my local development environment, so much of the configuration is already there.

# docker-compose.yml
version: '3'

services:
  app:
    build: .
    image: your/image:${TAG:-latest}

Docker Compose supports variable substitution out of the box. This means that you can refer to environment variables as ${VARIABLE} from within docker-compose.yml and Compose will replace them with their values. You can also provide a default value that will be used if the variable is unset or empty using ${VARIABLE:-default}. By using latest as the default value, we can ensure that nothing will break if we call docker-compose without setting the $TAG variable.
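As a quick illustration, this is how the tag resolves with and without $TAG set (the hash is just an example value):

# Builds and tags your/image:63b886d
TAG=63b886d docker-compose build

# Builds and tags your/image:latest because the default value kicks in
docker-compose build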

After building and pushing our image the next step is to deploy it to Kubernetes. There are many different ways to run a container in Kubernetes, but for now I will use a Deployment. I store all YAML configuration in the same directory, called kubernetes:

# kubernetes/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment

metadata:
  name: applicationname

spec:
  replicas: 1

  selector:
    matchLabels:
      app: applicationname

  template:
    metadata:
      labels:
        app: applicationname
    spec:
      containers:
        - name: app
          image: your/image:${TAG}

Unlike Docker Compose, kubectl apply doesn't support variable substitution, so we'll do that manually using envsubst in the next step. Because envsubst doesn't support a default value in case the variable is unset or empty, we just use ${TAG} here.
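To get a feel for what envsubst does with a restricted variable list, here is a small sketch (assuming GNU gettext's envsubst is installed):

# Only ${TAG} is substituted; other dollar signs pass through untouched
echo 'image: your/image:${TAG}, untouched: $HOME' | TAG=63b886d envsubst '${TAG}'
# prints: image: your/image:63b886d, untouched: $HOME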

Please note the --- at the top of the file, which indicates the start of a new YAML document. This is important because we will concatenate all YAML files in the kubernetes directory later on, and kubectl needs to know where the configuration for one object ends and the next begins.
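To illustrate, this is roughly what the concatenated stream looks like if the directory also contains, say, a hypothetical service.yaml next to the deployment:

cat kubernetes/*
# ---
# apiVersion: apps/v1
# kind: Deployment
# ...
# ---
# apiVersion: v1
# kind: Service
# ...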

The last step, which ties everything together, is the Makefile. I really like using Make and I use it for a lot of other build tasks. However, Make is designed to generate executables and other non-source files from source files and arguably isn't really meant for building and deploying Docker images. I still like to do this in Make simply because it is a pragmatic solution, but this might be a clear case of the Law of the instrument (if all you have is a hammer, everything looks like a nail) so feel free to implement the same logic in a shell script or some other way.

# Makefile
VERSION := $(shell git rev-parse --short HEAD)

dist:
	TAG=${VERSION} docker-compose build

deploy: dist
	TAG=${VERSION} docker-compose push
	cat kubernetes/* | TAG=${VERSION} envsubst '$${TAG}' | kubectl apply -f -

Let's walk through it and see what is happening here:

  1. At the top of my Makefile I'm setting the VERSION Make variable to the output of git rev-parse --short HEAD, which returns the short version of the Git commit hash (for example 63b886d). Alternatively, if you prefer tagging a version before every deployment, you can use something like git describe --tags (see the example output below this list).
  2. The dist target sets the TAG environment variable to the commit hash from the previous step and calls docker-compose build to build the image. Docker Compose will automatically substitute the variable and tag the image with the correct version tag.
  3. The deploy target first pushes the image to the remote repository, again setting the environment variable so Docker Compose can substitute it. It then reads the contents of the kubernetes directory, substitutes any variables found using envsubst, and pipes the result to kubectl apply.
    I explicitly list the environment variable I want to substitute ('$${TAG}') to prevent envsubst from trying to substitute every dollar sign found in any of the config files. The double dollar sign is to escape the dollar sign in Make.
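For reference, this is roughly what the two version commands return (the hash and tag are only illustrative):

git rev-parse --short HEAD    # e.g. 63b886d
git describe --tags           # e.g. v1.2.3-4-g63b886d (last tag, commits since, short hash)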

Here are the same steps implemented in a shell script:

#!/bin/bash

VERSION=$(git rev-parse --short HEAD)

TAG=${VERSION} docker-compose build
TAG=${VERSION} docker-compose push
cat kubernetes/* | TAG=${VERSION} envsubst '${TAG}' | kubectl apply -f -

When we run this, Kubernetes will either create the deployment (if it doesn't exist yet) or update the existing deployment, and automatically roll out the new image. If any other changes are made to any of the configuration files in the kubernetes directory, those will be applied as well. This way, we can always deploy our application with the confidence that our configuration matches the version of the application we're deploying.
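If you want to follow the rollout or double-check which image is now running, kubectl has commands for that (using the applicationname deployment from the configuration above):

# Wait for the rollout to finish
kubectl rollout status deployment/applicationname

# Show the image (and tag) the deployment is currently configured with
kubectl get deployment applicationname \
    -o jsonpath='{.spec.template.spec.containers[0].image}'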

Could you use some help with Kubernetes in your organization? Have a look at my consulting and training services to see how I can help you.