GitOps with Fleet
by Ross Fairbanks on Nov 19, 2021
This article continues our series looking into the growing trend of GitOps, and the software tools that help organizations to automate application and infrastructure delivery. This time, we explore another GitOps-oriented tool, called Fleet.
Rancher Labs has been one of the most prolific open-source creators during the growth of the cloud-native computing era. Now a part of SUSE, they have been responsible for a number of new software technologies that have helped organizations to adopt the cloud-native paradigm. Perhaps it’s no surprise, then, that with the recent popularity of the GitOps approach to application delivery, they have instigated an open-source project to compete with the likes of ArgoCD and Flux. But, as we shall see, the inspiration behind the project, which is called Fleet, relates to a very specific problem scenario.
As a project, Fleet is less mature than the other tools we’ve already covered in this series, and shouldn’t be confused with an earlier open-source project of the same name. The project was originally announced by Rancher Labs’ CTO, Darren Shepherd, in April 2020, and as I write is at a pre-release version, v0.3.7. Clearly, Fleet is still in its early stages of development.
Motivation
Rancher Labs has focused a lot of effort on providing solutions to facilitate edge computing with Kubernetes. A slimmed-down Kubernetes distribution called K3s spearheaded this approach, helping to support production applications running on small, remote clusters. But, a slimmed-down version of Kubernetes is just part of what’s needed to effectively manage applications running at the edge. The task of standing up clusters at the edge, and deploying applications in an automated fashion, is a particularly difficult one. The challenges involve scale, resource limitations, and constrained network bandwidth, amongst others. And, whilst GitOps is designed to achieve the goal of automating the creation of infrastructure and applications using version-controlled declarative configuration, it doesn’t specifically cater to these challenges presented by edge computing.
It’s against this backdrop, then, that Fleet sets out to provide a GitOps experience at the edge. This is also where it differentiates itself from the other, more popular tools we’ve previously discussed. The project set out to build a tool that could handle, literally, millions of small Kubernetes clusters using the GitOps approach.
So, if remote, resource-constrained Kubernetes clusters can be challenging to administer in a conventional manner, how does Fleet provide the glue needed to manage them GitOps-style? Let’s take a look.
How it Works
GitOps is a pattern that involves teams, processes, and technology. But, at its heart is automation provided by software, and just like most domain-specific solutions in the Kubernetes ecosystem, it relies on the controller pattern and custom resources. This approach is no different to that employed by ArgoCD and Flux.
Fleet Controllers
Every Fleet configuration has a management component, called the fleet-controller, which coordinates the GitOps workflow and normally runs on a dedicated ‘management’ cluster. The fleet-controller maintains a list of authenticated, target client clusters, and retrieves declarative workload configurations from remote Git repos. Let’s discuss each of these aspects in turn.
Registration
To ensure secure operation, target clusters need to be registered with the fleet-controller, which they do using a token. Usually, the registration is initiated by another controller running on the target cluster; the fleet-agent. In putting the onus on the agent running in the target cluster, Fleet caters for the unreliable, intermittent network connectivity that can characterize edge computing. The target cluster(s) communicate with the management cluster, as and when they’re able.
The token, which is generated on the management cluster by the fleet-controller, is used during installation of the fleet-agent in the target cluster (typically, a time-to-live duration is provided when generating the token). The fleet-agent then communicates with the fleet-controller to register itself, and periodically thereafter to retrieve updates from the watched Git repos.
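Token generation is itself declarative: it’s driven by a ClusterRegistrationToken custom resource applied to the management cluster. The following is a minimal sketch based on Fleet’s documented schema, with illustrative name and namespace values:

---
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterRegistrationToken
metadata:
  name: edge-token
  namespace: clusters
spec:
  # How long the token remains valid for new registrations
  ttl: 240h

The fleet-controller responds by populating a Secret with the token values, which are then supplied to the fleet-agent installation on the target cluster.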
Alternatively, if there’s a requirement to create Kubernetes clusters using declarative configuration according to GitOps principles, then registration can be initiated using the management cluster, instead. This is relevant when using Cluster API, for example.
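In this manager-initiated flow, a Cluster custom resource is created on the management cluster, referencing a Secret that contains the target cluster’s kubeconfig. A minimal sketch, with illustrative names, might look like this:

---
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: store-cluster-1
  namespace: clusters
  labels:
    env: edge
spec:
  # Secret in the same namespace holding the target cluster's kubeconfig
  kubeConfigSecret: store-cluster-1-kubeconfig

Fleet then uses the supplied kubeconfig to deploy the fleet-agent into the target cluster itself, rather than waiting for an agent to phone home.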
Git Repositories
For Fleet to know where to find and retrieve declarative configuration from Git repos, it needs to be provided with an instance of a custom resource. Fleet’s equivalent of ArgoCD’s Application resource or Flux’s GitRepository resource is the GitRepo resource, which contains the source repo URL, amongst other details:
---
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: podinfo
  namespace: clusters
spec:
  repo: https://github.com/stefanprodan/podinfo
  paths:
  - kustomize
  targets:
  - name: podinfo
    clusterSelector:
      matchLabels:
        fleet.cattle.io/cluster: cluster-3560033ce303
An instance of a GitRepo resource provides the location of the repo, a path, and an optional branch, commit, or tag, amongst other things. And, with Fleet able to manage multiple clusters, it also needs to provide details of the target cluster or clusters that are the recipients of the retrieved configuration. Target clusters are referenced using selectors to match labels, and can be selected as individual clusters or as groups of clusters. The latter allows an application to be deployed to multiple clusters that need to run the same workload.
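Groups of clusters are defined with a ClusterGroup custom resource, which selects its member clusters by label. A sketch, assuming target clusters carrying an illustrative env: edge label, might look like this:

---
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: edge-stores
  namespace: clusters
spec:
  # All registered clusters with this label become members of the group
  selector:
    matchLabels:
      env: edge

A GitRepo target can then reference the group by name (clusterGroup: edge-stores), rather than selecting individual clusters.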
Bundles
Once a GitRepo resource has been created with the relevant information on the management cluster, the fleet-controller will clone the repo according to the parameters provided. It’ll also check back with the Git repo every 15 seconds for any new changes, although a webhook can be configured instead, to trigger a pull whenever new commits are pushed. Once the repo’s content has been retrieved, the fleet-controller creates a Bundle, which is a custom resource containing the YAML manifests that need to be applied to the target cluster(s). Cluster-specific variations of Bundles are also created, as BundleDeployments. The fleet-agent running on each target cluster will eventually check in with the management cluster for updates, and download them as necessary. After the configuration has been retrieved, the local fleet-agent applies it to the cluster it’s hosted on.
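Bundles aren’t normally authored by hand, but seeing roughly what the fleet-controller generates helps to demystify the workflow. A heavily abbreviated sketch of a Bundle, based on Fleet’s documented schema, might look like this:

---
apiVersion: fleet.cattle.io/v1alpha1
kind: Bundle
metadata:
  name: podinfo-kustomize
  namespace: clusters
spec:
  # Each entry in-lines a manifest retrieved from the cloned repo
  resources:
  - name: kustomize/deployment.yaml
    content: |
      apiVersion: apps/v1
      kind: Deployment
      ...
  # Targets are carried over from the originating GitRepo resource
  targets:
  - name: podinfo
    clusterSelector:
      matchLabels:
        fleet.cattle.io/cluster: cluster-3560033ce303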
Config Formats
There are numerous ways to express the declarative configuration required to run applications in Kubernetes, but the most popular techniques involve the use of Helm charts or Kustomize overlays. Not forgetting, of course, the more elemental pure YAML approach. Fleet supports each of these formats as you might expect, but doesn’t provide support for less popular config management techniques, such as those based on Jsonnet. If you’re heavily invested in the use of Tanka, or something similar, then Fleet may not be for you.
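How Fleet processes a given path in a repo is governed by an optional fleet.yaml file placed alongside the configuration. A minimal sketch, with illustrative values, might look like this:

# fleet.yaml
# Namespace to deploy into, if the manifests don't specify one
defaultNamespace: podinfo
kustomize:
  # Sub-directory containing the kustomization to render by default
  dir: overlays/production
targetCustomizations:
# Override the overlay for clusters labelled env: edge
- name: edge
  clusterSelector:
    matchLabels:
      env: edge
  kustomize:
    dir: overlays/edge

Equivalent helm: sections can be used to set a chart’s release name, or to override its values on a per-target basis.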
When it comes to rendering the raw manifests that get applied to the target clusters, Fleet takes an interesting approach. Irrespective of the original format of the configuration, the fleet-agent dynamically renders the configuration into a Helm chart, before applying it to the local cluster as a Helm release. It’s intriguing because, on the face of it, this doesn’t seem particularly logical (except for config retrieved from a repo as a Helm chart).
I prefer helm to kustomize. The templating in helm is maddening, but the majority of it is unneeded and can be avoided. kustomize is too hard for me to understand after not looking at it for 5 minutes.
— Darren Shepherd (@ibuildthecloud) August 29, 2020
There is no specific explanation for this approach in Fleet’s documentation, but the above tweet from Darren Shepherd sheds some light on the reasoning. Whilst, to many, the templating aspects of Helm are a bit messy and convoluted, the packaging of basic application configuration, and its subsequent installation or upgrade by Helm, is quite straightforward and works well. So, perhaps the choice of Helm charts as Fleet’s rendering medium is not so controversial after all.
Conclusion
Like most things that emanate from Rancher Labs, Fleet is a well-engineered solution to a tricky problem. But, it’s competing in a hotly-contested space with the likes of ArgoCD and Flux, tools that are considerably more mature than Fleet at this stage. This may not concern the project too much, as it may ultimately differentiate itself from the competition by other means.
Edge computing is a growing field that will touch many parts of society in the years to come. And, companies that choose Kubernetes as the delivery vehicle for their apps may well look to Fleet for an edge-friendly GitOps solution. It will be this niche application of GitOps at the edge where Fleet finds the most resonance and uptake.
We love to hear our readers’ stories about the technologies we discuss on our blog. Don’t be shy about sharing your thoughts on Fleet and its application to edge computing scenarios!