May 16, 2018
When starting with Kubernetes, learning how to write manifests and bringing them to the apiserver is usually the first step. Most probably, kubectl apply is the command for this.
The nice thing here is that everything you want to run on the cluster is described precisely, and you can easily inspect what will be sent to the apiserver.
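For example, given a (hypothetical) manifest file deployment.yaml, applying it and previewing what would be sent could look like this:

kubectl apply --filename deployment.yaml
# preview what would be sent without changing anything in the cluster
kubectl apply --dry-run --filename deployment.yaml --output yaml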
After the joy of understanding how this works, it quickly becomes cumbersome to copy your manifests around and edit the same fields across different files just to get a slightly adjusted deployment.
The obvious solution to this is templating, and Helm is the most well-known tool in the Kubernetes ecosystem to help with it. Most how-tos directly advise you to install the cluster-side Tiller component. Unfortunately, this comes with a bit of operational overhead, and even more importantly, you need to take care to secure access to Tiller, since it is a component running in your cluster with full admin rights.
If you want to see what will actually be sent to the cluster, you can leave out Tiller, use Helm locally just for the templating, and apply the result with kubectl apply in the end.
There is no need for Tiller, and there are roughly three steps to follow:

1. fetch the chart source
2. render the templates with your own values
3. apply the resulting manifests with kubectl
This way you benefit from the large number of maintained charts the community is building, but have all the building blocks of an application in front of you. When keeping them in a git repo, it is easy to compare changes from new releases with the current manifests you used to deploy on your cluster; a small sketch of this follows the directory listing below. This approach might nowadays be called GitOps.
A possible directory structure could look like this:
kubernetes-deployment/
    charts/
    values/
    manifests/
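As a rough sketch of the git-based workflow mentioned above (directory and commit names are just examples):

git init kubernetes-deployment
cd kubernetes-deployment
# fetch charts, copy values and render manifests as shown below, then:
git add charts values manifests
git commit --message "add prometheus chart 5.5.3"
# after fetching a newer chart version and re-rendering:
git diff manifests/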
For the following steps, the helm client needs to be installed locally.
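One way to install it, assuming macOS with Homebrew (other platforms can use a release binary from the Helm project):

brew install kubernetes-helm
# prepare the local helm home; no Tiller is installed into the cluster
helm init --client-only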
To fetch the source code of a chart, the URL of the repository is needed, as well as the chart name and the desired version:
helm fetch \
--repo https://kubernetes-charts.storage.googleapis.com \
--untar \
--untardir ./charts \
--version 5.5.3 \
prometheus
After this, the template files can be inspected under ./charts/prometheus.
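A quick listing shows the usual chart layout (the exact contents vary by chart):

ls ./charts/prometheus
# Chart.yaml  README.md  templates/  values.yaml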
The default values.yaml should be copied to a different location for editing, so it is not overwritten when updating the chart source.
cp ./charts/prometheus/values.yaml \
./values/prometheus.yaml
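For example, the copy could be adjusted to disable the server's persistent volume. This assumes the chart exposes a server.persistentVolume.enabled key; check the copied file for what is actually available:

server:
  persistentVolume:
    enabled: false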
Once prometheus.yaml is adjusted as needed, the manifests can be rendered from the template source with the edited values file:
helm template \
--values ./values/prometheus.yaml \
--output-dir ./manifests \
./charts/prometheus
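With --output-dir, helm template writes the rendered files underneath a directory named after the chart, so they end up below ./manifests/prometheus. The exact files depend on the chart and the enabled components:

ls ./manifests/prometheus/templates
# e.g. configmap.yaml  deployment.yaml  service.yaml  serviceaccount.yaml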
Now the resulting manifests can be thoroughly inspected and finally applied to the cluster:
kubectl apply --recursive --filename ./manifests/prometheus
With just the standard helm command, we can closely check the whole chain from the chart's contents to the app coming up on our cluster. To make these steps even easier, I have put them into a simple plugin for helm and named it nomagic.
There might be dragons: an application may need different kinds of resources that depend on each other. For example, applying a Deployment that references a ServiceAccount won't work until the ServiceAccount is present. As a workaround, the filename of the ServiceAccount's manifest under manifests/ could be prefixed with 1-, since kubectl apply progresses over files in alphabetical order. This ordering is not needed in setups with Tiller, so it is usually not considered in the upstream charts. Alternatively, run kubectl apply twice to create all independent objects in the first run; the dependent ones will come up after the second run.
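Both workarounds as a sketch; the serviceaccount filename is hypothetical, so check what the chart actually rendered:

# workaround 1: rename the manifest so the ServiceAccount sorts first
mv ./manifests/prometheus/templates/serviceaccount.yaml \
   ./manifests/prometheus/templates/1-serviceaccount.yaml
kubectl apply --recursive --filename ./manifests/prometheus

# workaround 2: apply twice; the second run creates the objects
# that failed the first time due to missing dependencies
kubectl apply --recursive --filename ./manifests/prometheus
kubectl apply --recursive --filename ./manifests/prometheus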
And obviously, you lose the features that Tiller itself provides. According to the Helm 3 Design Proposal, these will be provided in the long run by the Helm client itself and an optional Helm controller. With the release of Helm 3, the nomagic plugin won't be needed, but it also might not function any more, since plugins will need to be implemented in Lua. So grab it while it's useful!
Please share your thoughts about this, other caveats, or ideas for improvement.
And as always: If you’re interested in how Giant Swarm can run your Kubernetes on-prem or in the cloud contact us today to get started.