Rolling Updates Of Kubernetes On Top Of Fleet
by Puja Abbassi on Apr 19, 2016
As some of you might have seen, we recently released a whole set of provisioning tools for CoreOS fleet clusters. Now, when you have a fleet cluster, the next step would be deploying unit files on it using fleetctl. However, that process gets quite tedious, especially with groups of units that should go together. Further, once you move on to more complex services, where you want to deploy and maybe even update several of these groups, you will quickly feel the need for a bit more functionality and ease of use. This is where Inago comes into play: a deployment tool that manages and updates services on fleet clusters.
Enter Inago
We built Inago from the ground up, based on what we learned building and maintaining a similar internal tool over the last two years, as well as the experience we gathered deploying and managing production microservice infrastructures on top of fleet. Inago forms an abstraction layer on top of groups of unit files, treating each group as a single object, and provides some sugar on top of what fleetctl offers.
Keep in mind that what we show here is just a first version, which will be extended over time. Further along the way, we will offer an accompanying tool that helps you prepare your services as unit files for deployment with Inago. Also, the intended use is deploying and managing infrastructure services, ranging from single modules like Elasticsearch or Prometheus to whole orchestration frameworks like Kubernetes and Docker Swarm, allowing you to manage the full lifecycle of your infrastructure on top of fleet.
You can think of this model as two layers that form a microservice infrastructure. The first layer is basic OS and cluster management, where we choose fleet as it is pretty flexible and lightweight. Bootstrapping this layer can be easily automated with our previously released tools. On top of that, Inago (and some future tools to come) comprises a second layer, where you manage the actual infrastructure components running in containers on top of fleet clusters. This way you get a manageable container-native infrastructure that can be updated and extended without downtime.
Using Inago is pretty straightforward. Each service you want to deploy with Inago should be located in its own folder. Inside this folder you have a group of unit files that work together to form your service. Once you have your unit files ready, you can use Inago to submit, start, stop, destroy, and even update the group with single commands, as sketched below. The actual unit files underneath are abstracted away.
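As a rough sketch, managing a group could look like the following, where my-service is a made-up folder containing made-up unit files; only the inagoctl subcommands themselves are taken from the description above (the update command is covered in detail later).

$ ls my-service/
my-service-api.service  my-service-worker@.service
$ inagoctl submit my-service
$ inagoctl start my-service
$ inagoctl status my-service
$ inagoctl stop my-service
$ inagoctl destroy my-service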
Deploying Kubernetes With Inago
We’ll have a look at how this works using a simple Kubernetes deployment as an example. Deploying Kubernetes actually means deploying (at least) two services, a master and a node. We have prepared some simplified units for these two as group folders to be used with Inago directly. The Kubernetes master in this simple example is not set up for high availability, i.e. it will be scheduled on a single machine only. The node group consists of 2 units, the kubelet and the proxy, which carry an @ at the end of their unit file names, indicating they can be sliced, i.e. scaled over several nodes. Sliced groups can be started with a scaling parameter to indicate how many slices we want deployed. Adding a Conflicts statement in the unit files of the group will schedule each group slice onto a separate machine.
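To make this more concrete, here is a heavily simplified sketch of what such a sliceable unit could contain. This is not the actual unit from our example repository; the file name and most directives are illustrative, but it shows where the @ naming and the Conflicts statement come in.

# k8s-node-proxy@.service (illustrative sketch only)
[Unit]
Description=Kubernetes proxy (slice %i)

[Service]
Environment="IMAGE=giantswarm/k8s-proxy:1.2.0"
ExecStartPre=/usr/bin/docker pull $IMAGE
ExecStart=/usr/bin/docker run --net=host --name=k8s-proxy-%i $IMAGE
ExecStop=/usr/bin/docker stop k8s-proxy-%i

[X-Fleet]
# Make sure each slice of this group lands on a separate machine
Conflicts=k8s-node-proxy@*.service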
Now back to our Kubernetes deployment. In this example we’re using a 3 node fleet cluster similar to one you can get with the official CoreOS Vagrant box. You can also use Mayu or Kocho to get a CoreOS cluster running on bare metal or AWS. We use inagoctl to start up our groups.
But first, we need to set up the network environment, so our Kubernetes deployment will work nicely together. This needs to be done only once, but on all nodes, which is why we set it up as a global unit (running on all machines).
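The part that turns a unit into a global one is a single fleet directive. A minimal sketch (the actual network unit naturally does more than this; the setup steps are omitted here):

# k8s-network.service (sketch; real setup steps omitted)
[Unit]
Description=Kubernetes network setup

[Service]
Type=oneshot
RemainAfterExit=yes
# The real unit configures the container network here
ExecStart=/bin/true

[X-Fleet]
# Run this unit on every machine in the cluster
Global=true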
$ inagoctl up k8s-network
2016-04-14 15:31:05.273 | INFO | context.Background: Succeeded to submit group 'k8s-network'.
2016-04-14 15:32:01.532 | INFO | context.Background: Succeeded to start group 'k8s-network'.
Then we can start the master group.
$ inagoctl up k8s-master
2016-04-14 15:33:09.396 | INFO | context.Background: Succeeded to submit group 'k8s-master'.
2016-04-14 15:35:01.558 | INFO | context.Background: Succeeded to start group 'k8s-master'.
You can check if it’s all up with inagoctl status k8s-master. Next, we start a triplet (remember we have 3 nodes?) of Kubernetes nodes.
$ inagoctl up k8s-node 3
2016-04-14 15:41:26.824 | INFO | context.Background: Succeeded to submit group 'k8s-node'.
2016-04-14 15:41:36.847 | INFO | context.Background: Succeeded to start 3 slices for group 'k8s-node'
As you can see, the group got submitted and subsequently started as 3 slices. We can use the status command to inspect the state of the group as well as which machines the slices got scheduled onto.
$ inagoctl status k8s-node
Group         Units  FDState   FCState   SAState  IP            Machine
k8s-node@59a  *      launched  launched  active   172.17.8.102  99e1c5b536b34418bd2c275e3b9fc5f3
k8s-node@7fa  *      launched  launched  active   172.17.8.101  fe7e8c62be3943d1ba28849d20193c8b
k8s-node@d25  *      launched  launched  active   172.17.8.103  3bade1a44b0c434ea03f7c110811bfc1
You can also use the verbose flag -v to see details on all unit files, as well as hashes for each one of them. This can be useful, for example, to check whether an update has gone through.
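For example, assuming the flag is passed to the status command as described:

$ inagoctl status -v k8s-node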
Testing Our Kubernetes
Now we can check if our Kubernetes is actually running with kubectl.
Note: We are running kubectl on the same node as the API server here. If you’re running on a different node, you can use the -s flag to connect to the right one. To find the right node, log in to a CoreOS machine and run
$ fleetctl list-units -fields=unit,machine --full --no-legend 2>/dev/null | grep ^k8s-master-api-server.service | cut -d/ -f2 | paste -d, -s
172.17.8.102
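With that IP in hand, pointing kubectl at the master from another machine could look like this, assuming the insecure API port 8080 used in this example setup:

$ kubectl -s http://172.17.8.102:8080 cluster-info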
Let’s check the cluster state.
$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
A further kubectl describe nodes will show that we’re running 3 nodes with kubelet and proxy version 1.2.0 deployed.
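If you just want the version information out of that rather lengthy output, a simple filter does the job (field names may vary between Kubernetes versions):

$ kubectl describe nodes | grep -i version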
Everything is ready to deploy our first pod. I prepared a little container based on the Kubernetes hello world example, which we can start with:
$ kubectl run hello-node --image=puja/k8s-hello-node:v1 --port=8080
This shouldn’t take long, and we can check if it is running.
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           1m
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-3488712740-zponq   1/1       Running   0          1m
As we have 3 nodes, let’s scale the deployment up to 3 replicas.
$ kubectl scale deployment hello-node --replicas=3
Wait a while and check again.
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   3         3         3            3           6m
Voilà, we have 3 replicas of our hello world pod available.
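To confirm that the replicas actually got spread across the nodes, the wide output of kubectl is handy (whether a NODE column is shown may depend on your kubectl version):

$ kubectl get pods -o wide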
Updating Kubernetes With Inago
Now that we have a running Kubernetes cluster, we might at some point want to update it. With Inago this is a simple two-step approach, much like you would update a pod running on Kubernetes.
Updating the Kubernetes Nodes
First, we modify the unit files to use the newer version. The Environment lines should look like the following:
Environment="IMAGE=giantswarm/k8s-kubelet:1.2.2"
and
Environment="IMAGE=giantswarm/k8s-proxy:1.2.2"
Next, we use the update command to perform the update of the group, which works in a similar way to the rolling updates of kubectl, but for your underlying infrastructure services, in this case Kubernetes itself. This will replace all the currently running instances with new, updated ones.
$ inagoctl update k8s-node --max-growth=0 --min-alive=2 --ready-secs=60
2016-04-14 15:45:22.988 | INFO | context.Background: Succeeded to update 3 slices for group 'k8s-node': [045 3a9 79b].
The arguments used here mean that Inago is allowed to create 0 additional group slices during the update; this can’t be higher in our example, as we only have 3 nodes in the cluster and each k8s-node slice needs to be scheduled on a different machine. Further, we tell Inago to keep at least 2 instances of the group running at any time, so we don’t have any downtime. Last but not least, the ready-secs flag determines how long Inago waits between rounds of updating. This gives each new node a bit of time to start replicating the hello-node pod we scheduled.
Watching kubectl get deployments during the update process, we will see the available instances of our hello-node pod briefly go down to 2 while a Kubernetes node is being updated, and then quickly come back up to 3. After a while, all nodes will be updated and another kubectl describe nodes will show that we’re now running 3 nodes with kubelet and proxy version 1.2.2 deployed.
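A plain shell loop is enough to keep an eye on this while the update runs (avoiding the assumption that watch is installed on the host):

$ while true; do kubectl get deployments; sleep 5; done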
Updating the Kubernetes Master
As we deployed the master as a single (non-sliced) group, we won’t be able to update it without downtime. Further, Inago’s update command currently does not support non-sliced groups. Still, an update using a few Inago commands is possible. We will experience a short downtime; however, as Kubernetes nodes are (at least partly) independent from their master, our pods should stay online, and only the API won’t be reachable for a short time.
The process is quite simple. First, we again edit our unit files to use the newer image tag, just as we did for the nodes. Then the update itself is a simple trio of commands.
$ inagoctl stop k8s-master
2016-04-14 16:32:08.465 | INFO | context.Background: Succeeded to stop group 'k8s-master'.
$ inagoctl destroy k8s-master
2016-04-14 16:32:27.384 | INFO | context.Background: Succeeded to destroy group 'k8s-master'.
$ inagoctl up k8s-master
2016-04-14 16:35:23.503 | INFO | context.Background: Succeeded to submit group 'k8s-master'.
2016-04-14 16:36:30.600 | INFO | context.Background: Succeeded to start group 'k8s-master'.
A quick look at kubectl version will tell us that we’re now running a 1.2.2 server.
That’s all, folks!
We have managed to start up a Kubernetes cluster, start a replicated pod on it, and perform a rolling update of the Kubernetes nodes without downtime for either Kubernetes or the running pod, all thanks to Inago. The techniques used here are in no way specific to Kubernetes. As Inago builds on top of fleet and systemd unit files, all kinds of distributed systems can be orchestrated with it. You can find some more examples, including a simple Elasticsearch setup, in the examples folder on GitHub.
Inago is still in its early days, and we already have some ideas about the directions in which we want to develop it (and some accompanying tools) further. We’d love to learn from your experience and hear what your pain points are. Check out the GitHub repo and take Inago for a spin on your clusters. We’re always happy to get feedback.