• Jul 31, 2023
Arm processors have emerged as a disruptive force in the computer industry, making significant inroads in the mobile market and now making their mark on PCs and data centers. With an excellent performance-per-watt ratio and low prices, they are an excellent choice for sustainability and efficiency.
Initially developed as a research project at Acorn Computers in 1985, ARM (originally Acorn RISC Machine, later Advanced RISC Machines) was created with the goal of building a more efficient and cost-effective processor. Over time, Arm processors have expanded into various domains, including mobile devices, consumer electronics, automotive systems, and, more recently, high-performance computing such as cloud environments.
Arm processors offer several key advantages that make them highly desirable in various computing environments:
Energy efficiency: Arm processors are known for their excellent power efficiency, making them ideal for portable devices with limited battery life. This efficiency is achieved through optimized instruction execution.
Arm processors use less power.
Scalability: Arm processors come in a wide range of configurations, from simple low-power designs for embedded systems to high-performance cores for servers and supercomputers. This scalability allows Arm to adapt to diverse application requirements, including the high-performance data center machines that matter most when running Kubernetes in production.
Arm processors have enough power.
Customization: Arm's licensing model has led to a diverse ecosystem of Arm-based chips from companies like Qualcomm, Apple, Samsung, and NVIDIA. Each company can build a chip that fits its needs perfectly. This, together with the simple architecture, can reduce overall costs.
Arm processors are cheap and customizable.
You have probably already read articles about creating Kubernetes clusters with Raspberry Pis (cheap, small hobbyist Arm computers) using software like K3s or MicroK8s. While those setups are not very difficult and can even be fun, we're not talking about enthusiast-level Kubernetes here; we're talking about production-grade clusters, meant to run complex applications with high availability and scalability.
This is Kubernetes for some people:
But Kubernetes for us is not just Kubernetes…
There are many other applications running in the cluster to enable observability, security, networking, operability, or scale. All of these applications also need an Arm version before you can call the cluster production-grade.
If you are thinking of migrating your Kubernetes clusters to Arm, you first need to make a list of all the applications required to actually run your software.
You will probably need:
The varying maturity of the apps in this ecosystem makes it very difficult to make blanket assumptions about Arm portability. You will need to research each application you depend on.
Once you have decided to go Arm in your Kubernetes clusters, there are several strategies you can follow for your node architecture that provide different pros and cons. Let’s review the most common examples.
In this solution, we keep the control plane on x86 nodes and create Arm node pools to act as worker nodes. Let's explore the advantages and considerations of this approach.
What’s good about this approach:
This approach can present issues though:
This architecture is particularly useful if you want to try Arm nodes without modifying your existing cluster creation process. This way you can try your apps and then have data to back your architecture decision. You do need to be sure, though, that all the apps (developed by you or off-the-shelf) that run on the worker nodes have an Arm version.
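As a sketch of how this works in practice, a workload can be pinned to the Arm worker pool with the standard kubernetes.io/arch node label (the Deployment name and image below are hypothetical placeholders):

```yaml
# Hypothetical Deployment pinned to Arm worker nodes via the
# well-known kubernetes.io/arch label that kubelet sets on every node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest  # must have an arm64 variant
```

Control plane components stay untouched on x86; only pods carrying the selector land on the new Arm pool.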
Now we consider using only Arm nodes in the cluster. This is perfect if you’re starting your Kubernetes journey and want to get all the advantages of Arm.
The main advantages of this approach are:
But as always, this comes with some drawbacks:
The perfect situation for adopting this solution is when:
This is the best topology in terms of cost and performance efficiency, so if the drawbacks are not stopping you, this is the way to go.
The last example is a mixed architecture where we use different node pools, mixing x86 and Arm nodes. While this approach adds complexity in terms of operation and implementation, it addresses some of the blockers present in the other options.
What’s good about this approach:
Obviously, the more complex architecture comes with issues; who would have thought that complexity would bring complications!?
This topology is suitable when there is a strong commitment to use Arm for efficiency, but you need to use some apps without an Arm version. This provides a way to work around the limitations.
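In a mixed cluster, the apps without an Arm version have to be kept off the Arm nodes. One way to sketch this is a node affinity rule on the pod spec (a fragment, assuming the standard kubernetes.io/arch label):

```yaml
# Pod spec fragment (hypothetical): restrict an x86-only app
# to amd64 nodes in a mixed-architecture cluster.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64"]
```

Multi-arch workloads need no such rule and can schedule freely across both pools.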
Compiling for Arm is a topic that deserves an article of its own, so let's focus on the things that matter when working with Kubernetes. We'll assume that if you are thinking about running your applications on Arm, you are capable of compiling your code for that platform.
As a reference, here’s a non-exhaustive list of things you’ll need to take care of in order to generate an Arm (or multi-arch) image:
Once this is done, there are three main steps to prepare for Arm Kubernetes clusters.
Build the image
Can you cross-compile for Arm?
Can you build multi-arch images?
There are several tools, like Docker Buildx, that help create multi-arch images without needing dedicated machines, as well as several toolchains for cross-building. This step shouldn't be very problematic; just find the tool that fits your needs.
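As a reference, a multi-arch build with Docker Buildx might look like this (the builder name, image name, and platform list are placeholders; QEMU emulation does the cross-building on a single machine):

```shell
# Create a Buildx builder and build one image for both architectures.
docker buildx create --name multiarch --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/my-app:latest \
  --push .
```

The result is a single tag backed by a manifest list, so each node pulls the variant matching its architecture.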
Test the image
Do you have Arm test environments?
Can you perform tests in different architectures?
You’ll need to have specific pipelines in place to test your software in all the architectures. This will increase the cost and complexity of those pipelines.
If you use a self-hosted solution for your CI, you'll need to create machines or emulated environments that let you test Arm images; the degree of difficulty varies with your setup. If you use a SaaS CI, your provider might support it (e.g. CircleCI), and there are third-party solutions for providers like GitHub Actions. Either way, you depend on the CI provider's support, which is not always desirable.
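For a quick smoke test without Arm hardware, QEMU user-mode emulation can run arm64 containers on an x86 CI machine. A minimal sketch (Docker with binfmt support assumed):

```shell
# Register arm64 binfmt handlers, then run an arm64 image under emulation.
docker run --privileged --rm tonistiigi/binfmt --install arm64
docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64
```

Emulation is slow and not a substitute for testing on real Arm nodes, but it catches obvious image and startup problems early.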
Deploy the image
Can you deploy in Arm?
Are your manifests ready?
Do you know where to deploy every application?
Depending on the topology, deploying can have different levels of complexity; it may require some work to adapt how you deploy software to Kubernetes.
For example, if you are using a mixed topology with Arm and x86 nodes, deploying a DaemonSet can be very challenging. You'll need to be sure that the images you set are multi-arch and that each node can select the right variant. In addition, you'll need to set a single set of limits and requests for all the pods; if the application performs differently on the two architectures, you'll have to use the more conservative values for both, which means wasting resources.
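A sketch of such a DaemonSet (name and image are hypothetical): the image must be multi-arch, and a node affinity rule keeps the pods off any architecture the image doesn't cover:

```yaml
# Hypothetical DaemonSet for a mixed cluster: runs only on the
# architectures the multi-arch image actually provides.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent             # placeholder name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values: ["amd64", "arm64"]
      containers:
        - name: node-agent
          image: registry.example.com/node-agent:latest  # multi-arch manifest
```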
Once we have the image, another important part of the toolchain to keep in mind is the container registry. Using Arm images brings some requirements that the registry must meet:
Using modern versions of the runtime and the registry should be enough to avoid most of these issues. The architecture has been a field of the OCI image manifest since version 1.0.0, but that doesn't mean everything is sorted out. As a reference, containerd has supported multi-arch since version 1.5 but had issues with some platforms as recently as early last year, and CRI-O still has problems with some architectures (like Apple silicon processors).
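To verify what a registry is actually serving, you can inspect the manifest list behind a tag (the image name is a placeholder):

```shell
# List the architectures covered by an image's manifest list.
docker manifest inspect registry.example.com/my-app:latest \
  | grep '"architecture"'
```

If arm64 is missing from the output, the nodes in your Arm pools will fail to pull a usable variant of that tag.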
Using Arm processors for your workloads can provide significant cost savings and improved power efficiency. However, building a production-grade Kubernetes cluster on Arm is not easy: a number of operational applications need to run alongside your own software, and all of them need to be ready for Arm. Checking and adapting all that software for Arm compatibility is not a trivial task; it can require a fair amount of effort, and maintaining duplicated pipelines can be costly and complex.
This adaptation can be painful, but efficiency in the long term is important, not only in terms of cost but also in environmental impact. Time is running out and there is no planet B! 🌎
Editor's note: For a deeper exploration of this subject, we highly recommend checking out our on-demand webinar by Carlos. It's a fantastic opportunity to delve deeper into the topic and gain valuable insights.