Meet Kocho - Our Bootstrapping Tool For CoreOS Clusters On AWS

by Puja Abbassi on Feb 16, 2016


Last week we made Mayu and Yochu, a set of tools to set up customized CoreOS clusters on bare metal, available as open source. Today we are introducing Kocho, Mayu’s cloudy sibling in our family of CoreOS provisioning tools.


Bootstrap AWS CoreOS Clusters with Kocho


Like Mayu, Kocho sets up a fleet cluster with CoreOS nodes, but it does so on AWS instead of bare metal. It drives AWS CloudFormation through the AWS API to set up the cluster based on CloudConfig and CloudFormation templates.
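To give a rough idea of how the pieces fit together, here is a minimal CloudFormation fragment (shown in YAML for brevity) that launches an auto-scaling group of CoreOS nodes and passes a CloudConfig in as user data. This is only an illustration of the mechanism, not one of Kocho's shipped templates; the AMI ID, key pair, instance type, and metadata value are placeholders.

```yaml
Resources:
  CoreOSLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-00000000          # placeholder: a CoreOS AMI for your region
      InstanceType: m3.large         # placeholder
      KeyName: my-keypair            # placeholder
      UserData:
        Fn::Base64: |
          #cloud-config
          coreos:
            fleet:
              metadata: "role=primary"
  CoreOSAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones:
        Fn::GetAZs: ""
      LaunchConfigurationName:
        Ref: CoreOSLaunchConfig
      MinSize: "3"
      MaxSize: "3"
```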

Additionally, it can set up CloudFlare in front of the cluster for domain resolution. It can also integrate with Yochu (just like Mayu can) to set up fleet meta-data and custom versions of fleet, etcd, and Docker on the CoreOS nodes. Furthermore, it has a built-in Slack integration that tells your team who created (or destroyed) what kind of cluster and when, which helps you keep track of your clusters, especially when you, like us, work in a distributed team.

For different use cases we have created templates for two types of clusters that you can set up with Kocho. You can adjust these templates to your individual needs or come up with your own.

The first and rather simple type is a standalone CoreOS cluster with fleet and a full etcd quorum. Here, all nodes are set up similarly and all of them participate in the etcd quorum.
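As a rough sketch (not Kocho's actual template), the CloudConfig for a node in such a standalone cluster could look like the following, with every node joining the same etcd quorum via a discovery token; the token and the metadata value are placeholders.

```yaml
#cloud-config
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token>       # placeholder token
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  fleet:
    metadata: "role=standalone"                        # placeholder
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```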

The second and more interesting type is a kind of split CoreOS cluster. Here you first set up a primary cluster, which runs an active etcd quorum and can host service discovery and other central cluster tooling. You then add one or more secondary clusters that run etcd in proxy mode and serve as compute clusters for the actual applications and services you or your developers want to run (a rough sketch of such a secondary node's CloudConfig follows below). This production setup is also described in the CoreOS documentation and allows us to have several separate compute clusters (here called secondary) and deploy to them selectively based on their fleet meta-data. Expect more about our model of running clusters and some details of our CloudFormation setup in a follow-up blog post soon.
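In contrast to the quorum members above, a node in a secondary cluster runs etcd in proxy mode and points fleet at the local proxy. The sketch below follows the CoreOS etcd proxy documentation rather than Kocho's exact templates; the discovery URL and metadata values are again placeholders.

```yaml
#cloud-config
coreos:
  etcd2:
    proxy: on
    # placeholder: the same discovery token (or peer list) as the primary cluster
    discovery: https://discovery.etcd.io/<token>
    listen-client-urls: http://localhost:2379
  fleet:
    etcd_servers: http://localhost:2379
    metadata: "role=secondary,cluster=compute-1"   # placeholder metadata used for selective deploys
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```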


Excursion Into Deploying Custom Binaries


As mentioned above, you can use Kocho (or Mayu) in combination with Yochu. Since we didn’t go into much detail about how Yochu actually does this “deploying of custom versions of fleet, etcd, and Docker”, we thought we’d give a quick walk-through of the inner workings of this tool.

Yochu is a single unit file that you can put into your CloudConfig so that it runs on every boot of the machine. What Yochu does boils down to the following three steps (a rough sketch of them as CloudConfig units follows the list):

  1. It starts an OverlayFS unit that mounts over /usr/bin/.
  2. It downloads custom binaries of said tools over HTTPS (e.g. from S3 or Mayu) into the OverlayFS.
  3. It restarts the units for etcd, fleet, and Docker to ensure that the system is using the customized versions deployed in step 2.
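Below is a minimal sketch of what these steps could look like as units in a CloudConfig. This is not Yochu’s actual unit file: the unit names, paths, and download URLs are made up for illustration, and the real tool handles versions, checksums, and failure cases more carefully.

```yaml
coreos:
  units:
    # Step 1: mount an overlay filesystem on top of /usr/bin
    # (the upper and work directories must exist on a writable volume)
    - name: usr-bin.mount
      command: start
      content: |
        [Mount]
        What=overlay
        Where=/usr/bin
        Type=overlay
        Options=lowerdir=/usr/bin,upperdir=/overlay/upper,workdir=/overlay/work
    # Steps 2 and 3: fetch custom binaries into the overlay, then restart the services
    - name: fetch-custom-binaries.service
      command: start
      content: |
        [Unit]
        Requires=usr-bin.mount
        After=usr-bin.mount

        [Service]
        Type=oneshot
        # placeholder URLs; in practice these would point at S3 or a Mayu instance
        ExecStart=/usr/bin/curl -fsSL -o /usr/bin/etcd2 https://example.com/etcd2
        ExecStart=/usr/bin/curl -fsSL -o /usr/bin/fleetd https://example.com/fleetd
        ExecStart=/usr/bin/curl -fsSL -o /usr/bin/docker https://example.com/docker
        ExecStart=/usr/bin/chmod +x /usr/bin/etcd2 /usr/bin/fleetd /usr/bin/docker
        ExecStartPost=/usr/bin/systemctl restart etcd2.service fleet.service docker.service
```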

This procedure leaves the underlying CoreOS untouched, so the provisioned custom tools and the deployed CoreOS version can be changed completely independently of each other. Another way to achieve this would be compiling custom CoreOS images. However, that takes away some flexibility and speed of deployment.

Especially in development and testing environments, this speed and flexibility come in handy. You can, for example, try out a different version of Docker by just editing the Yochu unit file and restarting it. You don’t even need to reboot. If you want to go back to the versions that come with the underlying vanilla CoreOS, you can simply stop the OverlayFS unit. This also ensures that you always have the fallback of using the original tools in case, for example, the OverlayFS fails.


Try It Out


If quickly setting up custom CoreOS clusters on AWS sounds good to you, you can try it out directly. You just need AWS credentials; then follow the instructions in the GitHub repo or in the docs, and your cluster will be up in no time. As always, we are happy to hear your ideas and see what you would use this tool for. You could, for example, extend it to support your cloud provider of choice. Feel free to contribute in the form of issues and PRs. You can also join our mailing list (giantswarm) or hop on IRC (#giantswarm on freenode.org) for a little chat.