• Jan 21, 2020
This series of articles delves into the State of the Art of Container Image Building. We’ve already covered Podman and Buildah, Img, Kaniko, and this time around it’s the turn of Makisu.
Makisu is another open-source image building tool, which was conceived by Uber’s engineering team. Like many other open-source projects, Makisu was developed out of perceived deficiencies in other similar technologies. In particular, Makisu focuses on optimizing image build times and image size.
Like Kaniko, Makisu doesn’t invoke containers to execute the Dockerfile instructions that define a container image build. It can be run either locally as a standalone binary, or it can be sandboxed in the confines of a container. Its usefulness as a standalone binary is limited, however, as it’s unable to execute RUN Dockerfile instructions. You don’t really want Makisu altering the local filesystem content of the host via a RUN instruction!
In fact, Makisu won’t let this happen by default; you need to specify the flag --modifyfs=true to allow commands to run against the filesystem. But be warned: if you run the standalone Makisu binary with --modifyfs=true, you’ll end up removing much of the host’s rootfs. Makisu is designed to run in a container, where it’s safe to alter filesystem content.
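For completeness, a standalone invocation would look something like the following. The tag and context path here are hypothetical, and since --modifyfs is left at its default, a Dockerfile containing RUN instructions won’t work, as discussed above:

$ makisu build \
--tag=mycorp/my-app:dev \
/path/to/build/context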
The Makisu container image that is run to execute a build is minimal in nature. It’s constructed from the scratch base image, and contains only the Makisu binary and a file of root CA certificates. A build context (including a Dockerfile) needs to be provided to the container using a volume.
Makisu pulls the base image defined in the Dockerfile and extracts its filesystem inside its container. It also stores a copy of this filesystem in memory. Subsequent build steps are run against the content of this filesystem, which is then scanned for changes. Any changes are also reflected in the ‘in-memory’ copy, and a new ‘diff layer’ is created containing the changes. The diff layers are cached in a directory for use by future builds, which obviously assumes a volume is mounted for the purpose.
The build steps defined in the Dockerfile are executed in this manner to completion, whereupon Makisu will push the built image to a container image registry (if one is specified). If Docker were being used as the container runtime for Makisu, a build container might get invoked using:
$ docker run --rm \
-v $(pwd):/makisu-context \
-v /tmp/makisu-storage:/makisu-storage \
gcr.io/makisu-project/makisu:v0.1.12 build \
--tag=mycorp/my-app:1d03df1 \
--push=quay.io \
--modifyfs=true \
/makisu-context
If you read the previous article in this series, you will have already concluded that Makisu takes an almost identical approach to image building as Kaniko. You can execute build steps without a Docker daemon, and without the elevated privileges required to run nested containers. But where Makisu stands out in comparison is in its approach to build cache implementation.
Once a decision has been made to ditch the services of the Docker daemon for image builds, you immediately lose its inherent caching capabilities. The caching of build steps that is provided by the Docker daemon may not be as feature-rich as many would like, but caching is an essential feature of image builds. It helps to optimize build times by re-using content produced by identical previously executed build steps. For Uber, this was one of the contributory factors that prompted their decision to create an alternative container image build tool. So, what does Makisu provide by way of caching features?
In a Kubernetes setting, a pod that contains a build container can, in theory, land on any node within a cluster. This presents a problem: how can a build container make use of cached image layers produced by a previous build iteration? We could try to force a pod to land on the node where a previous build iteration was executed, but this encroaches on the role and purpose of the scheduler. Instead, Makisu makes use of a distributed cache to remedy the problem.
First of all, Makisu maintains a mapping between Dockerfile instruction sequences and the digests of the diff layers they produce. These mappings are held in a key-value store, which can be a flat file, a distributed Redis cache, or a generic HTTP-based cache. The important point is that the cache is distributed, and therefore available to be referenced by any Makisu build container that can reach it.
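For example, pointing a containerized build at a shared Redis cache is just a matter of adding the relevant flag to the invocation shown earlier. Treat this as a sketch rather than a copy-paste recipe: the Redis address is hypothetical, and the flag name (--redis-cache-addr) is taken from Makisu’s documentation, so check it against the version you’re running:

$ docker run --rm \
-v $(pwd):/makisu-context \
-v /tmp/makisu-storage:/makisu-storage \
gcr.io/makisu-project/makisu:v0.1.12 build \
--tag=mycorp/my-app:1d03df1 \
--push=quay.io \
--modifyfs=true \
--redis-cache-addr=build-cache.example.com:6379 \
/makisu-context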
These mappings enable Makisu to determine whether a build step needs to be executed, or whether it can reuse the content of an existing layer. If a match is found in the cache, the layer can be unpacked from the local storage managed by Makisu (if it exists there), or pulled from an image registry (if a previous build has been pushed).
The keys in the cache are generated from the Dockerfile instruction for the build step and the keys associated with previous build steps within a build stage. The associated value is the hash of the content of the image layer previously produced. If a build step instruction sequence matches an existing key in the cache, Makisu uses the digest (held as the key’s value) to locate the diff layer.
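The chaining of keys is the important detail: change an instruction early in a Dockerfile, and the keys of every subsequent step change too, invalidating their cached layers. Purely as an illustration of the idea (this is not Makisu’s actual key format or hashing scheme), the chaining might be sketched like this:

# Illustration only -- not Makisu's real key derivation.
# Each step's key folds in the previous step's key, so an early change
# cascades through every later key.
key1=$(printf 'FROM alpine' | sha256sum | cut -d' ' -f1)
key2=$(printf '%s RUN apk add --no-cache wget' "$key1" | sha256sum | cut -d' ' -f1)
# In the cache, each key maps to the digest of the diff layer its step produced.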
The cache has a configurable time-to-live (TTL) duration to ensure that cached layers don’t go stale.
During an image build using the Docker daemon, diff layers are generated for each build step that produces or changes content. Lots of content-producing build steps can lead to bloated images. The number of these intermediate layers can sometimes be kept in check by judicious use of build stages, or by combining many commands into a single Dockerfile instruction. Makisu adds its own, complementary technique for alleviating this problem.
Makisu’s Dockerfile instruction parser introduces a directive that controls when diff layers are committed during a build. Any instruction annotated with the syntax #!COMMIT is interpreted by the parser as a build step that will generate a new layer; instructions without the annotation will not.
FROM alpine
RUN apk add --no-cache wget
RUN apk add --no-cache curl #!COMMIT
<SNIP>
In the example above, the RUN instruction that installs wget is not committed as a layer, whereas the one that installs curl is. The committed step creates a layer containing all of the content added or changed since the last commit, or since the beginning of the build stage.
This explicit caching behavior is turned on for builds when the --commit=explicit flag is specified for Makisu. Without it, the #!COMMIT syntax is treated as a comment, just as it would be by the Docker daemon’s parser. In this way, Dockerfiles written for Makisu’s explicit caching remain compatible with the Docker daemon.
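Turning it on is just a matter of adding the flag to the build invocation, for example (reusing the hypothetical tag from the earlier example):

$ docker run --rm \
-v $(pwd):/makisu-context \
-v /tmp/makisu-storage:/makisu-storage \
gcr.io/makisu-project/makisu:v0.1.12 build \
--tag=mycorp/my-app:1d03df1 \
--modifyfs=true \
--commit=explicit \
/makisu-context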
Explicit commits provide greater flexibility for image builds: fewer layers are created, which usually results in smaller images and more maintainable Dockerfiles.
Makisu is a very capable container image building tool, borne out of a genuine need to fix shortcomings experienced in a large-scale engineering environment. Its approach removes the need for elevated privileges during container builds (although builds are executed as the root user), and it has a novel approach to build cache implementation.
It doesn’t tackle the issue of build inefficiency that’s inherent in the sequential parsing of Dockerfile instructions. And the build execution doesn’t always faithfully reflect the expected behavior of Docker image builds. But, Makisu comes from a renowned engineering team and is another great addition to the new breed of container image building tools.