In my previous article, I described a quick and easy way to get a Kubernetes cluster up and running on your local machine. Hopefully by now, you’ve had a chance to try this out, created and deployed a few Docker containers, and are now convinced that containers are the way to go!
You probably noticed that the whole process of creating and deploying containers adds a few more steps to your development workflow. The good news is that we can automate these steps by setting up a Continuous Delivery pipeline, allowing us to build, test and run (in production) our applications using a single codebase and minimal intervention.
Introduction
If you’re about to start the development of a new application or move an existing one to a container-based hosting environment, then you will likely come up against the challenge of how to do so while maintaining the ability to iterate efficiently.
The initial challenge is to decide whether to make use of one of the existing off-the-shelf solutions or to bite the bullet and build our own automated build pipeline. For example:
Option 1 – Just use OpenShift and save yourself some time.
Option 2 – Use Cloud Foundry, if you’re able to conform to their patterns.
Option 3 – Build it yourself.
If OpenShift or Cloud Foundry aren’t viable options, or if you just like the open source way of doing things and want to find out how it all works ‘under the hood’, then this article is for you.
Reinventing the Wheel?
If you’re blown away by the scope of PaaS solutions like OpenShift or Cloud Foundry, don’t worry – we aren’t going to try to develop anything on that scale. Instead, we will just be making use of existing frameworks and tools to build a pipeline solution that meets our needs.
The main areas our pipeline will be concerned with are the following:
- Building application source code – which tools we use here will depend on the programming languages and frameworks used in the app. Our example will be a simple Node.js web application, written in JavaScript, so the only build step needed will be to fetch the libraries it depends on.
- Packaging application binaries inside Docker containers, along with any dependent libraries that may be required. The pipeline will also automate versioning and pushing new container images to a suitable registry. We will need to carefully consider which operating system image to use as a base for our containers – unlike OpenShift, for example, we won’t be aiming to automate this decision.
- Running automated tests (unit tests, integration tests, UI tests, etc.) against newly built applications, either inside their containers or outside them (the latter in the dev environment only).
- Application monitoring for each of the environments.
- Some kind of workflow management, which will allow us to promote container builds from development, through test, to production environments according to the results of testing and any other gates or checks we may wish to put into the process.
In other words, we will not be aiming to develop a solution as broadly applicable as OpenShift, but we will be aiming to automate most tasks that we believe may be performed repeatedly in our particular context. We may also need to set up a very similar kind of pipeline again in future, perhaps for other projects, so we will also aim to make the environment setup as automated or scripted as is practical.
Continuous Delivery Pipeline
The figure below, adapted from Bryant [1], illustrates a typical application delivery pipeline, modified to accommodate containers.
We’ll now consider each environment separately.
Dev Environment
The dev environment will be self-contained, allowing applications to be developed and tested locally, without any external dependencies.
Developers will, as usual, be able to build and run automated tests locally, both before and after wrapping an update in a container, using existing tools such as JSUnit or Cucumber. Numerous IDEs and text editors now come with at least some support for building Docker containers, so developers should be free to choose whichever tool they prefer to work with, including the plain Docker command line tools.
Although containers will be built in this stage, its output is not a container but rather the application source code, including a Dockerfile. This will be hosted in a git repository of some sort, which will need to be accessible to the CI tools described in the next section.
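To make this concrete, below is a minimal sketch of what such a Dockerfile might look like for our example Node.js app. The base image, port and file names are illustrative assumptions rather than recommendations:

```dockerfile
# The base image choice is ours to make – an official Node.js image is assumed here
FROM node:6

# Copy the dependency manifest first so the 'npm install' layer is cached between builds
WORKDIR /usr/src/app
COPY package.json .
RUN npm install --production

# Copy the application source and declare how to run it
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
```

A developer can then build and exercise the container locally with the standard Docker command line tools, for example:

```bash
docker build -t example-app:dev .
docker run --rm -p 8080:8080 example-app:dev
```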
CI Tools
We will make use of a Jenkins server, which will either poll for changes in the git repository or start builds whenever it receives a push trigger (e.g. from GitHub). One of the main aims of this effort is to be able to set up new environments quickly and easily, so we will be treating our Jenkins servers as cattle rather than pets.
Jenkins runs the same scripts for building, testing and code-reviewing the app as are used in the dev environment. Assuming these tests all pass, Jenkins will then trigger a container build using the CloudBees Docker Pipeline Plugin. If this build is successful, the container image is uploaded to a container registry server.
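As a rough illustration, a scripted Jenkins pipeline using this plugin might look like the sketch below; the image name, registry URL and credentials ID are hypothetical placeholders rather than values from a real setup:

```groovy
// Illustrative Jenkinsfile (scripted syntax) – all names and URLs are placeholders
node {
    def image

    stage('Checkout') {
        checkout scm
    }
    stage('Build and test') {
        // The same build and test scripts as used in the dev environment
        sh 'npm install && npm test'
    }
    stage('Build container') {
        // docker.build is provided by the CloudBees Docker Pipeline Plugin
        image = docker.build("example-org/example-app:${env.BUILD_NUMBER}")
    }
    stage('Push to registry') {
        // Only reached if the earlier stages succeeded
        docker.withRegistry('https://registry.example.com', 'registry-credentials') {
            image.push()
            image.push('latest')
        }
    }
}
```

Tagging each image with the Jenkins build number gives us the automated versioning mentioned earlier; the extra 'latest' tag is purely a convenience.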
This CI-built container image now becomes the single source of truth for the new version of the app, as it is promoted through the pipeline and into production.
QA
At this stage, the containerised app is run in a more integrated environment, most likely alongside other application containers hosting any required dependencies, such as databases, web services or service stubs. This will enable performance testing and any non-automated acceptance testing to be carried out before the new app container is promoted to Staging. In our example, the QA, Staging and Production environments will all use Kubernetes hosted on Google Cloud, but any other hosting option (internal, external or cloud-based) or container orchestration solution, such as Docker Swarm or Apache Mesos, could be used, as long as all three environments are kept as similar as possible.
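For illustration, the containerised app could be described to Kubernetes with a Deployment manifest along the lines of the sketch below (names, image and port are placeholders); dependent services such as databases or stubs would be deployed alongside it in the same way:

```yaml
# Illustrative Deployment for the QA environment – names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        # The exact image that was built and pushed by the CI stage
        image: registry.example.com/example-org/example-app:42
        ports:
        - containerPort: 8080
```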
Staging
In Staging, the new version of the containerised app is made available to a wider audience of users. Some organisations may choose to skip this environment, or combine it with QA into a single Test environment. Others may wish to have more than one Staging environment available, perhaps for different user groups to evaluate and test.
Production
Traditionally, only Operations teams had access to applications running in Production. With DevOps, we aim to bring the teams responsible for developing and running applications together. We will provide tools to enable new versions of an application to be deployed automatically, without any service disruptions. We will also allow developers to view live application logs, so they can see how well their code is running with real users and real data.
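For example, assuming the app runs as a Kubernetes Deployment named example-app (a placeholder name), a newly promoted image could be rolled out and observed with standard kubectl commands along these lines:

```bash
# Roll the new image out gradually, without taking the service down
kubectl set image deployment/example-app example-app=registry.example.com/example-org/example-app:42

# Watch the rollout, and roll back if something looks wrong
kubectl rollout status deployment/example-app
kubectl rollout undo deployment/example-app   # only if needed

# Let developers follow the live application logs
kubectl logs -f deployment/example-app
```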