Nov 9, 2021
GitOps is getting a lot of attention in the cloud-native community, and in the previous article in this series, we explored the features of ArgoCD. In this next article, we’ll take a look at what is arguably ArgoCD’s biggest competitor in terms of tooling: Flux. Specifically, we’ll be exploring Flux v2 in this post. Just like ArgoCD, the open-source Flux project aims to provide automated management of application and infrastructure deployments, using declarative configuration stored in a version control system (e.g. Git).
In fact, Giant Swarm is a recent adopter of Flux! We initially used ArgoCD, as we liked its config management plugin support. However, we now both use Flux internally and offer it as a managed service in our management clusters for managing apps and clusters. Choosing between ArgoCD and Flux is not a straightforward decision, but in our case, our multi-tenancy requirements made Flux the better choice.
Since the project’s inception, Flux has had an interesting journey, so let’s start by unravelling the story.
Flux originated from a company called Weaveworks, which is widely recognized for coining the term ‘GitOps’ back in 2017. During the course of its existence, Flux has had two different incarnations, a v1 and a v2. This was not a normal version progression, however, but two distinct versions that are entirely different and incompatible. Flux v1 had a degree of success, with a healthy community and a sizeable number of adopters. But, in 2020, the project recognized a number of shortcomings in the feature-set and decided to re-engineer Flux from the ground up as v2. A single deployment of Flux v1 is limited to watching a single Git repository, for example, whilst v2 isn’t constrained in this way.
Interestingly, the ArgoCD and Flux projects collaborated fleetingly on a side project called the ‘GitOps Engine’, whose purpose was to share some of the core features of the GitOps process. This was seen as a move toward ‘democratizing GitOps’. However, the Flux project eventually dropped its involvement with the GitOps Engine, preferring to chart its own course.
As I write, Flux v2 is close to general release, with its APIs now stable. Because of the architectural differences between v1 and v2, detailed migration documentation has been published by the project for those organizations currently running Flux v1. Inevitably, with changes of this size, care should be exercised during migration, and until v2 reaches GA status, further breaking changes could still arise.
Like the Argo family of projects, Flux is an incubating project of the Cloud Native Computing Foundation, and has a number of early adopters already.
Let’s have a look at how it works.
One of the main aims of the big re-write was to separate out the core features into a number of discrete components that follow the Kubernetes controller pattern. Collectively, the components are termed the ‘GitOps Toolkit’, which shouldn’t be confused with the already mentioned ‘GitOps Engine’. In separating out the features into a number of loosely-coupled components, the project has sought to reap the benefits associated with microservices adoption and has provided interested parties with a route to reuse the code in other, related projects.
Like most Kubernetes controllers, the GitOps toolkit components extend the Kubernetes API with custom resource definitions (CRDs) that describe the domain at hand. Let’s take a look at the individual components that make up the toolkit, and the CRDs they introduce and make use of.
For a workflow to function according to GitOps principles, the tooling that’s used needs a means of retrieving declarative configuration from a versioned source. Flux’s toolkit component that manages sources is the aptly-named source-controller, which works with several different CRDs according to the source type. Typically, this is a Git or Helm chart repository, but the source-controller supports S3-compatible storage as a source too. Flux uses the CRDs to describe instances of each of these source types, and when applied to a cluster where Flux is running, the source-controller monitors the remote source described by the CRD instance and retrieves its content as it changes over time.
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  interval: 1m0s
  url: https://kubernetes.github.io/ingress-nginx
Here’s an example of a HelmRepository source that describes the repo containing the chart for the community-maintained NGINX ingress controller.
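Git sources are declared in much the same way. Here’s a minimal sketch of a GitRepository object; the name, repository URL, and branch are illustrative placeholders rather than anything from a real project:
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: my-app             # hypothetical name, for illustration only
  namespace: flux-system
spec:
  interval: 1m0s           # how often the source-controller polls the repo
  url: https://github.com/example/my-app   # placeholder repository URL
  ref:
    branch: main           # track the tip of the main branch
Once applied, the source-controller fetches the repository on the given interval and makes its contents available to the other controllers.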
The source-controller is used to retrieve a source’s content, but not to apply it to the cluster; this is the job of different controllers. The configuration for a Kubernetes application is often expressed as pure YAML in files, or Kustomize overlays when multiple configurations are required. Flux’s controller for automating the application of such configuration retrieved from sources is the kustomize-controller. It periodically applies what is referenced in a Kustomization CRD (in its simplest form, a path in a source managed by the source-controller), or any plain Kubernetes manifests it finds at that path in the absence of a kustomization.yaml file.
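To make that concrete, here’s a minimal sketch of a Kustomization that applies the manifests found under a path in the hypothetical Git source above; the name and path are assumptions for illustration:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m0s           # how often to reconcile cluster state
  path: ./deploy           # assumed path to manifests within the source
  prune: true              # remove objects whose manifests disappear
  sourceRef:
    kind: GitRepository
    name: my-app           # refers to the GitRepository sketched earlier
With prune enabled, the kustomize-controller also deletes objects from the cluster when their manifests are removed from the source, keeping the cluster faithful to what’s in Git.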
Not everyone is sold on the Kustomize overlay approach for managing Kubernetes configuration, however, and the big alternative is the Helm chart abstraction (amongst some other techniques). Clearly, a controller that is built on the Kustomize approach is not going to cope with Helm charts by way of applying configuration to Kubernetes clusters. What’s needed is something that ‘understands’ Helm charts, and Flux employs a dedicated controller, called the helm-controller, to handle the creation of Helm releases for Helm charts. Flux v1 had an equivalent component called the helm-operator, and the helm-controller takes much inspiration from this early incarnation.
The helm-controller watches for instances of HelmRelease CRDs, which describe the characteristics of a Helm release, including the chart to install or upgrade within a given source. So, a DevOps engineer just needs to define an application using a HelmRelease custom resource, and when it’s applied to the cluster, the helm-controller takes care of retrieving the chart from its source and installing or upgrading it in the cluster. The source-controller continues to monitor the source, and any updates to the chart or values are subsequently reflected in the cluster by Flux.
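As a sketch, a HelmRelease that installs the chart from the HelmRepository source shown earlier might look like this; the version constraint and values are illustrative assumptions, not a recommendation:
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  interval: 5m0s                  # how often to reconcile the release
  chart:
    spec:
      chart: ingress-nginx        # chart name within the repository
      version: "4.0.x"            # illustrative semver constraint
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx       # refers to the HelmRepository above
        namespace: kube-system
  values:                         # inline overrides for the chart's values
    controller:
      replicaCount: 2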
A feature that has carried through from Flux v1 to v2 is the automatic update of image tags for workloads. This feature is optional, but is available if you’re sufficiently confident in your testing to allow Flux to monitor image tags for subsequent automated application updates in-cluster. If enabled, Flux monitors and retrieves tags from an image repo in a remote container registry (according to a set policy) and then updates the appropriate manifest in the watched Git repo. From here, the GitOps control loop kicks in, retrieves the change, and then applies it to the cluster.
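The policy itself is expressed with dedicated CRDs. The following is a minimal sketch, assuming the image automation APIs at v1beta1 and a hypothetical image name: an ImageRepository scans the registry for tags, and an ImagePolicy selects the newest tag satisfying a semver range:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  image: registry.example.com/my-app   # placeholder image reference
  interval: 1m0s                       # how often to scan for new tags
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: my-app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-app                       # the ImageRepository above
  policy:
    semver:
      range: 1.x                       # pick the latest 1.x tag
An ImageUpdateAutomation object then tells Flux which Git repository and branch to commit the resulting tag changes to.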
In effect, the use of the image automation controllers removes the need to manually commit/merge a configuration change to the watched source, handing over the responsibility to Flux to perform this task instead. This level of automation isn’t for everyone, as it necessarily relies on a mature CI/CD pipeline for success.
A question that might spring to mind is: how can Flux itself be managed in a GitOps fashion? In fact, this question is relevant for all tools that claim to implement GitOps workflows. After all, GitOps tools are Kubernetes applications themselves, just like the applications they seek to manage on our behalf. This is a classic chicken-and-egg scenario.
The Flux project provides a bootstrapping mechanism, which is implemented using a dedicated CLI sub-command (flux bootstrap) or a project-specific Terraform provider. Both mechanisms use the same code base, and their use results in the Flux component manifests being committed to a Git repository and applied to the cluster, with Flux configured to synchronize itself from that same repository.
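For example, bootstrapping against a GitHub repository might look something like the following; the owner and repository names are placeholders:
flux bootstrap github \
  --owner=example-org \
  --repository=fleet-infra \
  --branch=main \
  --path=clusters/my-cluster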
The bootstrapping mechanism is idempotent, and is useful both for creating and re-creating clusters. The Argo project has a similar fledgling mechanism called Autopilot.
For progressive deployments where application releases are carefully managed into production, Flux works in conjunction with another open-source project, Flagger. Flagger is akin to Argo Rollouts and enables fine-grained control of application releases in the form of canaries, blue/green deployments, and A/B testing. Flagger has existed as a standalone project since its inception in 2018, but it has recently been brought under the Flux umbrella and is a natural enhancement to the GitOps capabilities that Flux provides.
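To give a flavour, a Flagger canary is itself declared as a custom resource. Here’s a minimal sketch targeting a hypothetical Deployment; the analysis thresholds are illustrative:
---
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
  namespace: prod
spec:
  targetRef:                     # the workload to manage releases for
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 80
  analysis:
    interval: 1m                 # how often to step the canary
    threshold: 5                 # failed checks before rollback
    stepWeight: 10               # traffic shift per step (%)
    maxWeight: 50                # maximum canary traffic (%)
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99                # minimum request success rate (%)
        interval: 1m
Flagger shifts traffic to the new version in increments, checks the metrics at each step, and rolls back automatically if they breach the thresholds.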
Flux is a first-class GitOps solution with a strong pedigree and a project community of very talented engineers. Perhaps as a result of the decision to re-engineer from the ground up, the project lags behind ArgoCD in terms of features and maturity. For example, whilst ArgoCD has a number of different mechanisms to manage multi-tenancy environments, this is something that’s still evolving for Flux.
Other perceived differences are there by design. ArgoCD can ‘read’ Kubernetes configuration from a wide number of tools, whereas the Flux project has deliberately chosen to limit its scope to Kustomize and Helm charts. The rationale is that the hydration of configuration into consumable manifests is not, per se, the task of the GitOps function, but one that can be triggered as part of a CI pipeline before being consumed into the GitOps workflow. It’s fair to say that different people will have different opinions on this approach, but it is characteristic of the confidence the maintainers have in the engineering decisions they made in the re-write of Flux.
If you’re a Flux user, it would be great to hear from you! Did you swap ArgoCD for Flux or the other way around? Let us know how you think these solutions stack up against each other.