So far in this blog series on CI/CD, we have looked at the personas involved in CI/CD and the benefits of using Docker containers to create a consistent environment for developers and DevOps engineers. In this blog, I will show you how I use Docker and the make utility to execute the CI-related steps for my Hugo-based blogs. Specifically, we will see how code is built and tested from a developer's perspective. Writing and publishing a new blog is similar to coding and releasing a new feature in a software product, and I will highlight the similarities throughout this article.

Using Makefile to build and test the generated static files

The make utility and the associated Makefile allow developers to define and execute a set of commands related to building and testing their code. Just as Docker provides a consistent development environment, the Makefile standardizes the targets (sets of commands) used for building and testing software. The idea is that all developers and DevOps engineers use the same set of commands, as defined in the Makefile.

Here is the Makefile and the targets I use for my blogs:

BUILD_DIR ?= /tmp
 
hugozip:
	hugo --themesDir ${BUILD_DIR}/themes/ -d ${BUILD_DIR}/build/blogs
	cd ${BUILD_DIR}/build; zip blog.out.zip -r blogs; mv blog.out.zip ${OUTPUT_DIR}/`date +"%Y_%m_%d_%H_%M_%S"`_blog_output.zip
 
hugolocal:
	hugo server -D -v --noHTTPCache --bind 0.0.0.0 --debug --themesDir ${BUILD_DIR}/themes/
 
clean:
	rm -rf ${BUILD_DIR}/*

Let me walk you through the various Makefile targets:

  • The hugolocal target is used to test blog changes locally. Hugo has a built-in web server that lets us perform quick tests. This target serves as the unit test for my blogs.
  • The hugozip target is used to create the artifact, which in my case is a ZIP file containing the static HTML files generated by Hugo. The idea is to test the artifacts in an environment that matches the deployment environment.
  • The clean target helps you clean up all the generated files within the Docker container. You may not need it if you restart your Docker container every time.
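As an aside, the timestamped artifact name that hugozip constructs with date can be previewed on its own. This sketch only reproduces the naming scheme; it does not run hugo or zip:

```shell
# Reproduce the artifact naming scheme used by the hugozip target:
# <timestamp>_blog_output.zip, where the timestamp comes from date(1).
stamp=$(date +"%Y_%m_%d_%H_%M_%S")
artifact="${stamp}_blog_output.zip"
echo "$artifact"
```

Because the timestamp is second-granular and sorts lexicographically, each run of the target produces a distinct, chronologically ordered artifact name.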

The step-by-step process

Now let us look at the various steps in the process of building and testing my blog. Writing a new blog is somewhat similar in process to building a new feature in a software product, and I will highlight these similarities in the steps below. We will be using the Dockerfile defined in the previous blog and the Makefile shown above.

Installing the prerequisites

As discussed in the previous blog, the environment and software packages needed for my blogging are defined as a Docker container. Therefore, on my workstation (the Docker host), I need to install only two software packages: git, for working with the source code repository, and Docker.
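Since the installation commands themselves vary by platform (apt, dnf, Docker Desktop, and so on), here is only a small, hypothetical sanity check that the two prerequisites are on the PATH:

```shell
# Check that the only two host-side prerequisites are installed.
# Installation is distro-specific and therefore not shown here.
missing=""
for tool in git docker; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all prerequisites found"
else
  echo "missing:$missing"
fi
```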

Writing code (or blog)

The next step is to check out the source code using git commands, create a branch for your feature (or blog), and write your code. As a developer, you can use any IDE on your workstation to write your code (or blog).
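The branching step can be sketched as follows. Everything here is illustrative: a throwaway local repository stands in for your real hosted one, and the branch name new-blog-post is hypothetical:

```shell
# Throwaway local repository to illustrate the branching step; in a real
# project you would "git clone" your hosted repository instead.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b new-blog-post   # one branch per feature (or blog)
git branch --show-current          # prints: new-blog-post
# ... now edit the content with the IDE of your choice ...
```

Keeping each feature (or blog) on its own branch is what later allows the unit test, artifact, and merge steps to operate on an isolated set of changes.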

In real-world projects, you would have analyzed the requirements and written a design document for the feature before starting to write code.

Unit Test the code (or blog)

This is the first step where the Docker and Makefile based environment comes into the picture. As a developer, you run the Docker container and invoke the make command with a specific target to run unit tests. Here is the docker command for running the hugolocal make target:

docker run --rm -it --name srihugo -v $PWD:/cbblogs:ro -v $PWD/blogoutput:/cbblogs/blogoutput:rw --expose 1313 --network host cloudbuilder/hugo:0.1 make hugolocal
The above command assumes that you have created a Docker image named cloudbuilder/hugo with the tag 0.1. In this command, we mount the source code directory (with the changes made earlier) into the Docker container using a volume mount, and then ask Docker to execute the make hugolocal command inside the container. As described earlier, the hugolocal target starts the built-in web server on port 1313, and this port is exposed from the Docker container. You can point your browser to http://your_docker_host_ip:1313 to view the blogs and verify the changes. This step corresponds to the unit tests that developers need to run before committing code.

In real-world projects, unit testing is a critical step that allows developers to verify whether the code written for the feature works correctly. At this point you are not testing the feature per se; you are testing whether the building blocks of the feature (the code) are robust.

Commit your code

As a developer, once you are satisfied with your changes, you need to commit the code changes to your source code repository. Note that your code is not yet merged back to the main repository. In a distributed version control system like git, this means committing code to your local copy of the repository. You will execute git commands on your workstation to accomplish this step.

In real-world projects, you may also perform an additional step of syncing your local copy of the code with the latest in the main repository. This ensures that your code changes are made on top of the latest code in the main repository.
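The commit-and-sync steps above can be sketched end to end. Everything here is illustrative: the user details and branch name are hypothetical, a throwaway local "origin" stands in for your real hosted repository, and the main branch is assumed to be named main:

```shell
# Throwaway repositories simulating a remote ("origin") and a local clone.
origin=$(mktemp -d); work=$(mktemp -d)
git init -q --bare "$origin"
git clone -q "$origin" "$work"
cd "$work"
git config user.email blog@example.com   # hypothetical identity
git config user.name  "Blog Author"
git commit -q --allow-empty -m "Initial commit"
git push -q origin HEAD:main             # seed main on the remote

git checkout -q -b new-blog-post         # feature (or blog) branch
echo "draft" > post.md
git add post.md
git commit -q -m "Add draft of new blog post"   # commit to the local copy

git fetch -q origin                      # pick up the latest main
git rebase -q origin/main                # replay the branch on top of it
git log --oneline
```

The fetch/rebase pair at the end is the "sync with the main repository" step; after it, the branch's commits sit on top of the latest main.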

Create build artifacts

While unit testing provides some amount of quality control by exercising various code paths, it is also important to test the feature as a whole. To do this, we first create the artifact and then run thorough feature tests using it. In my case, here is the docker command I use to generate the ZIP file containing the statically generated HTML files:

docker run --rm -it --name srihugo -v $PWD:/cbblogs:ro -v $PWD/blogoutput:/cbblogs/blogoutput:rw  cloudbuilder/hugo:0.1 make OUTPUT_DIR=/cbblogs/blogoutput hugozip
The hugozip Makefile target uses Hugo to generate the static HTML files and places the resulting ZIP file in the directory /cbblogs/blogoutput. Note that this directory is mounted from the workstation (the Docker host) into the Docker container, which means the output is also available on the workstation.

Test the artifacts

The generated artifacts need to be tested in a non-developer environment. The unit tests were executed in a Docker container meant for developers, and they used Hugo itself to serve the new blog. Now it is important to test the new blog after it has been generated as static HTML files. In the case of my blogs, I unzip and copy the static HTML files into a tmp directory and use the nginx web server to test them. Here are the docker commands to do this:

docker pull nginx

docker run -it --rm -p 8080:80 --name test-hugo-output -v /tmp/blogs:/usr/share/nginx/html/blogs:ro -d nginx
The docker pull command fetches the latest nginx container image from Docker Hub. The docker run command mounts the generated HTML files into nginx's default web hosting folder and maps port 8080 on the host machine to port 80 inside the container, which nginx uses. You can now point your browser to http://your_docker_host_ip:8080 to view the blogs and verify the changes.

This step is akin to performing feature tests in a real-world project: you deploy the artifacts into an environment that resembles the final production environment and run your feature tests there.

Merge the code to the main repository

Once the artifacts are verified in a local environment, the next step is to merge the code changes (the feature) into the main source code repository and delete the branch created for the feature.
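The merge-and-cleanup step can be sketched with a throwaway local repository. The branch name and commit messages are hypothetical, and in a real project the merge would normally happen through a merge request on the hosting platform rather than on your workstation:

```shell
# Throwaway repository: create a feature branch, merge it back, delete it.
repo=$(mktemp -d)
cd "$repo"
git init -q
base=$(git symbolic-ref --short HEAD)    # default branch (main or master)
git config user.email blog@example.com   # hypothetical identity
git config user.name  "Blog Author"
git commit -q --allow-empty -m "Initial commit"

git checkout -q -b new-blog-post
echo "final" > post.md
git add post.md
git commit -q -m "Add new blog post"

git checkout -q "$base"
git merge -q --no-ff -m "Merge new-blog-post" new-blog-post
git branch -d new-blog-post              # delete the feature branch
```

Using --no-ff keeps an explicit merge commit, so the history still shows that the blog was developed on its own branch even after that branch is deleted.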

In a real-world environment, this step will include creating a merge request, which should also include all the unit and feature test results. Peer developers will then review the source code changes and approve the new feature to be merged into the main repository. Some additional automated tests might also be executed as part of this process.

What comes next?

This blog covered the CI process from the perspective of a developer. Next, we will look at the CI process from the perspective of DevOps engineers and a CI system like Jenkins. Some of the steps performed by developers will be repeated by the CI system, but in a different environment and context. The CI process will then conclude by generating an official artifact, which will go through some more testing before being deployed to production.