Apr 16, 2020
Generally, cloud-native applications are characterized by small, independent units of functionality called microservices packaged into containers. Architecting applications as microservices promises many benefits, but deploying them requires a lot of careful consideration.
The deployment needs to be automated, resilient, scalable, discoverable, and so on. A Kubernetes cluster is often the preferred delivery host for these containerized workloads because it satisfies all of these operational requirements straight out of the box.
Kubernetes operates using a declarative model; a user defines a desired ‘state of the world’ and Kubernetes takes care of transforming the actual state of the world into the desired state. To define the ‘state of the world’ means to define an application and the manner in which it should operate within a cluster. To help with this, Kubernetes exposes an API that supports a number of different resources for abstracting workloads and the manner in which they should run. These resources are predominantly defined using YAML-based manifests, although JSON definitions are also supported.
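For illustration, here's a minimal sketch of such a manifest; a hypothetical Deployment that asks Kubernetes to keep three replicas of an nginx-based service running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.17
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f` declares the desired state, and Kubernetes continuously reconciles the cluster's actual state towards it.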
For the DevOps teams responsible for defining cloud-native applications, this means the creation of a number of different YAML files, or a large single file containing multiple definitions. A complete definition of a discrete service might consist of not only a workload abstraction, such as a Deployment, but also a Namespace, a ServiceAccount, a Role, a RoleBinding, a Service, an Ingress, ConfigMaps, Secrets, and much more besides. These definitions get multiplied when applications consist of multiple services and start to mushroom when a cluster is home to numerous applications.
Authoring these definitions is no mean feat, but it’s also a challenge to manage their ongoing existence and evolution. That’s why it’s important to manage these definitions in a controlled manner, which can be easily achieved using a source code version-control system, such as Git. It provides the ‘single source of truth’ when configuration drifts due to changes applied imperatively and is a familiar system of change management for DevOps teams.
But, controlling the change to the desired state is just one aspect of the thorny issue of application configuration management in Kubernetes.
Yes, defining the entire configuration of a multi-service application can be onerous, but provided the definitions are controlled and their evolution is managed, the hard work pays off. As new features and fixes traverse the continuous integration/delivery (CI/CD) pipeline, amended definitions are tested, committed and applied, and Kubernetes works off these definitions to reliably and securely host the application.
However, problems arise when we consider how to work with applications beyond the straightforward deployment scenario.
Let’s consider a few alternative scenarios:
First, applications are rarely destined for a single environment; they typically progress through development, staging, and production on their way to release. Whilst the definition of the application should remain largely unchanged across these environments, it's entirely reasonable to expect that some small configuration changes will be required for each one. A service in one environment might use a ConfigMap for storing the endpoint details to access a test database, whilst the equivalent service in a production setting might require a different ConfigMap for a production database.
How can we largely provide the same configuration, but tweak it for different environments?
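As a minimal sketch of this scenario (the names and values here are hypothetical), the two environments might each need their own variant of an otherwise identical ConfigMap:

```yaml
# test environment
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_ENDPOINT: "test-db.internal:5432"
---
# production environment
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_ENDPOINT: "prod-db.internal:5432"
```

Everything else about the service's definition stays identical; maintaining near-duplicate sets of YAML for the sake of one value is exactly the maintenance burden in question.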
Next, some configuration is common to many applications within an organization; standard labels or access-control definitions, for example. The simplest approach might be to replicate this common configuration for each application, but is there a more subtle, efficient, and composable technique for handling this?
How can shared configuration be updated seamlessly without reworking every individually affected item?
How can organizations ensure that best practice is followed at all times?
Finally, consider a software provider that makes its application available for others to deploy. One application consumer will run the application one way, and another in an entirely different way. Hence, providing a single set of YAML definitions won't suffice.
How can a software provider implement a set of configuration definitions without compromising the many different scenarios that might be required by end-users?
These are tricky problems to resolve, and it’s fair to say that there is no universally accepted solution in the Kubernetes community. There are techniques and tools that attempt to address the conundrum, however, and we’ll be getting to these in the upcoming articles in this series. But, first, let’s look at some of the approaches employed to try and fix application configuration management in Kubernetes.
Generally, there are four different approaches to handling application configuration management in Kubernetes. Each attempts to address one or more aspects of the configuration conundrum.
Replicate and Customize
This is by far the easiest approach, and simply involves the replication of an existing, valid definition. The definition can then be customized to suit the purpose at hand. Whilst this approach harnesses the endeavors of those who have gone before and involves little work to bring about fitness for purpose, it's also the least flexible.
If the original definition is considered to be the ‘best in class’ configuration for the application in question, it’s highly likely that any subsequent changes by its author are going to be important and relevant. The replicated and customized copy is going to need reworking to reflect the changes. It falls on the consumer to monitor upstream for changes and to apply those changes when they occur.
This approach quickly becomes untenable when it’s adopted wholesale, and for that reason, it’s generally considered impractical.
Parameterized Templating
In this approach, resource definitions are provided for applications, but they are templated to account for customization requirements. Parameters inserted into a template can either make use of sane default values or apply a user-provided alternative. This caters for a generic application configuration whilst also allowing the flexibility to handle alternative scenarios. The level of flexibility afforded is directly related to how much of the template is parameterized.
The Helm package manager is an example of a community tool that applies this approach.
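As a sketch of the technique (the chart layout and value names here are hypothetical), a Helm template might parameterize the replica count and image tag, with sane defaults supplied in the chart's values.yaml:

```yaml
# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "nginx:{{ .Values.image.tag }}"
```

```yaml
# values.yaml -- defaults, overridable at install time
replicaCount: 1
image:
  tag: "1.17"
```

A user with different requirements overrides the defaults at deployment time, for example with `helm install my-release ./chart --set replicaCount=3`.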
Overlay Configuration
Another approach to application configuration management is to use overlay configuration. In this approach, a ‘base’ configuration is provided which reflects the generic configuration for an application. However, the base configuration can be ‘overlaid’ with snippets of customized configuration to nuance the definition for a specific purpose. The base is effectively merged with the overlay. In this way, overlay configuration can be used to handle multiple deployment scenarios for an application.
Kustomize is one such overlay configuration tool and is built into the native Kubernetes CLI, kubectl.
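As a sketch (the directory layout is hypothetical), a production overlay references a base configuration and patches only the fields that differ:

```yaml
# overlays/production/kustomization.yaml
bases:
  - ../../base
patchesStrategicMerge:
  - replicas.yaml
```

```yaml
# overlays/production/replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 5
```

Running `kubectl apply -k overlays/production` renders and applies the merged result, while the base remains untouched and reusable by other overlays.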
Programmatic Configuration
The final school of thought when it comes to application configuration management is a programmatic one. The essence of this approach is to use either a purpose-built Domain Specific Language (DSL) or a more generic programming language. This enables the creation of a default definition for an application's configuration, whilst allowing default configuration items to be replaced with specific values to suit the purpose at hand. Perhaps the most attractive benefit of this approach is the ability to use familiar programming constructs like conditionals, loops, and functions. Programmatic configuration also aids the reuse of configuration definitions.
Examples of this approach include jk, Dhall, Cue, Tanka, and Pulumi.
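None of these specific tools is shown here, but a plain Python sketch conveys the general idea: build definitions with functions, defaults, loops, and conditionals instead of copy-and-paste (all names are hypothetical):

```python
import yaml  # PyYAML


def deployment(name, image, replicas=1, env="test"):
    """Build a Deployment definition as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"env": env}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }


# A loop replaces duplicated YAML: production simply gets more replicas.
environments = {"test": 1, "production": 5}
manifests = [
    deployment("hello-web", "nginx:1.17", replicas=count, env=env)
    for env, count in environments.items()
]

print(yaml.safe_dump_all(manifests))
```

The emitted YAML can be piped straight into `kubectl apply -f -`; the tools above build on this basic idea with type checking, validation, and state management.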
Kubernetes 1.0 was released in the middle of 2015, and whilst much has changed in the intervening years, its declarative model remains exactly the same. Despite this, the application configuration management conundrum remains unresolved.
The fact that various tools and approaches have come and gone over the years, or have at least endured major functional changes, is testament to this fact. There is copious activity, healthy debate, and many strong views held within the community, which all bodes well for an eventual solution to the problem. Better still, we may end up with several alternative solutions from which to choose.
The approaches to application configuration management in Kubernetes highlighted in this article are supported by many different community tools. No one approach is right or wrong, and there are scores of different solutions to choose from. In this series of articles, we'll take a closer look at some of the most popular and interesting technologies that seek to address the configuration management problem.
We’ll be exploring the following tools, to see what they offer, and to understand their strengths and weaknesses:
Tanka
Pulumi
Kpt
Stay tuned for the first in the series and let us know if you have any questions on Twitter!