May 27, 2020
An in-depth series on how to easily get centralized logging, better security, performance metrics, and authentication using a Kubernetes-based platform.
This is the third part of our tutorial for creating and deploying an application with the cloud-native stack. Part 1 covered setting up a development environment, while Part 2 walked through building the actual application.
In this entry, we want to address the deployment problem: how to package and prepare our application for deployment on our platform.
In this part, we’ll prepare a Helm chart for our application. We could do the same thing with just Kubernetes manifests and the “kubectl” command, but this way we’re able to deploy all components of our application with a single Helm command. We can also make the Helm chart available online, so it’s easier for others to find and install it. If you’re not interested in the deployment setup details and don’t want to deploy the application step-by-step, don’t worry; both Docker images and the Helm chart are available publicly.
BTW, if you haven’t followed Part 1, please make sure to configure your Helm repositories by running:
$ helm repo add giantswarm https://giantswarm.github.io/giantswarm-catalog
$ helm repo add giantswarm-playground https://giantswarm.github.io/giantswarm-playground-catalog
$ helm repo update
To install the ready-made Helm chart, just run:
helm install --name gs-todo --namespace todo --version 0.2.7 giantswarm-playground/giantswarm-todo-app
Then, you can continue to the part about operations and benefits.
When preparing the deployment manifests for our applications, we’ll use standard Kubernetes components introduced in Part 1: The Kubernetes Platform. To make it easier, we’ll pack them together as a Helm chart and add MySQL in the process as a dependency.
If you don’t have our tutorial’s repository checked out yet, execute the following steps:
git clone https://github.com/giantswarm/giantswarm-todo-app.git
cd giantswarm-todo-app
git checkout blog_v1
Let’s have a look at what we want to create:
The Helm chart is just a set of Kubernetes resources, so we need to start with the basic Kubernetes building blocks. The diagram above shows the main components we have to create in the “todo” namespace. Basically, they include a “Deployment” for the “apiserver” and “todomanager” components of our microservice application and a “Service” for each, to act as a load balancer for accessing the pods.
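As a quick illustration, a minimal Service for the “apiserver” could look roughly like this (the name, labels, and port are assumptions for this sketch, not the chart’s exact manifest):

apiVersion: v1
kind: Service
metadata:
  name: apiserver
  namespace: todo
spec:
  selector:
    app: apiserver        # matches the labels on the pods created by the Deployment
  ports:
    - port: 8080          # port exposed by the Service (assumed)
      targetPort: 8080    # container port the traffic is forwarded to (assumed)

Requests sent to the Service’s cluster IP are then load balanced across all pods matching the selector.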
The MySQL part will be provided by composing Helm charts. When creating a Helm chart, you can point to other Helm charts that act as a requirement for the chart you’re creating. For that purpose we will use the public MySQL chart and avoid writing this part on our own.
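As a sketch, the dependency on MySQL in “requirements.yaml” looks roughly like this (the version constraint and repository URL here are illustrative, not necessarily the exact ones used in the chart):

dependencies:
  - name: mysql
    version: "1.6.*"                                                 # example version constraint
    repository: "https://kubernetes-charts.storage.googleapis.com"   # example chart repository

Running “helm dependency update” would then download the referenced chart into the “charts/” directory.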
The diagram above also shows some software deployed to the “loki” and “linkerd” namespaces. These were already created for us when we deployed Loki and Linkerd with Helm in Part 1 of the tutorial. Still, this is a good place to take a second look at what was deployed there.
The Loki chart deployed two things:
A StatefulSet, which runs the actual Loki server that stores the logs and responds to log queries. A StatefulSet is a Kubernetes resource similar to a Deployment, but meant for running stateful workloads.
A DaemonSet, which runs the Promtail tool on each host of the Kubernetes cluster (that’s what a DaemonSet does). This tool tails all logs written by pods running on the specific cluster node and forwards them to the Loki server for storage.
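To illustrate the idea, a heavily trimmed, hypothetical DaemonSet for a Promtail-like log shipper could look like this (the image and paths are assumptions, not the Loki chart’s actual manifest):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
  namespace: loki
spec:
  selector:
    matchLabels:
      app: promtail
  template:
    metadata:
      labels:
        app: promtail
    spec:
      containers:
        - name: promtail
          image: grafana/promtail:latest      # example image reference
          volumeMounts:
            - name: pod-logs
              mountPath: /var/log/pods        # read the logs written by pods on this node
              readOnly: true
      volumes:
        - name: pod-logs
          hostPath:
            path: /var/log/pods               # node-local directory with pod logs

Because it is a DaemonSet, Kubernetes schedules exactly one such pod on every node of the cluster.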
The Linkerd chart deployed multiple Linkerd services; let’s name just the ones most important for us:
The control plane is actually a whole family of Deployments, but the important one for us is “linkerd-proxy-injector” (not shown directly in the picture). As you can imagine, for Linkerd to be able to trace every request directed to our services, it needs to put a proxy in front of the containers running our application. It does this by dynamically injecting a new container called “linkerd-proxy” into each of the pods deployed for our application. Since a Pod is a set of containers running together, this fits nicely into this Kubernetes unit of deployment. Inside the Pod, “linkerd-proxy” proxies all incoming and outgoing traffic of the other containers in the pod and forwards traffic information to the other Linkerd services running in the namespace.
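Injection is opt-in: you mark a namespace or a workload with an annotation and the injector’s admission webhook adds the sidecar whenever new pods are created there. A minimal sketch for our “todo” namespace (assuming annotation-based injection as documented by Linkerd):

apiVersion: v1
kind: Namespace
metadata:
  name: todo
  annotations:
    linkerd.io/inject: enabled   # tells linkerd-proxy-injector to add the linkerd-proxy container to new pods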
Prometheus is the hidden hero of our story. It is the metrics server that is now the de facto standard for collecting and analyzing metrics in Kubernetes clusters. Linkerd uses it to fetch and store all of its monitoring metrics. Prometheus stays hidden in our tutorial for now, but we will come back to it later and give it more attention.
We already saw that Grafana is our web UI for viewing data coming from two sources: the Prometheus server for metrics and Loki for logs.
The whole source code of the chart is available in the “helm/giantswarm-todo-app” directory. “Chart.yaml” describes the Helm chart itself. “requirements.yaml” lists the additional sub-charts we want to use. In our case, it’s “mysql”, an external public chart we downloaded and placed in the “charts/” directory, which is meant for sub-charts.
The last top-level file is “values.yaml”, which contains the templating variables that can be changed by the Helm user. Our actual deployment manifests are placed in the “templates/” directory. I won’t be writing about every single detail in them, but let me point out a few interesting snippets.
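To give an idea of what those templating variables look like, a simplified “values.yaml” could contain entries along these lines (the key names here are illustrative; the chart’s actual keys may differ):

apiserver:
  replicas: 2
  image:
    repository: giantswarm/giantswarm-todo-apiserver   # illustrative image name
    tag: latest
todomanager:
  replicas: 2
mysql:
  mysqlDatabase: todo   # values under “mysql” are passed down to the MySQL sub-chart

Anything defined here can be overridden at install time with “--set” or a custom values file.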
A few files in the “templates/” directory have a more global character. The rest is either about the “apiserver” or “todomanager”. The global ones are:
Now, “apiserver” and “todomanager” are very similar: their basic building blocks are a Deployment and a Service, placed in files with either an “apiserver_” or a “todomanager_” prefix. Let’s point out a few things using “apiserver” as an example:
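For example, the Deployment template typically pulls its replica count and image from “values.yaml” through Helm templating. A trimmed, illustrative fragment (not the chart’s verbatim content, and matching the example values shown above) might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
spec:
  replicas: {{ .Values.apiserver.replicas }}   # taken from values.yaml
  selector:
    matchLabels:
      app: apiserver
  template:
    metadata:
      labels:
        app: apiserver
    spec:
      containers:
        - name: apiserver
          image: "{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag }}"
          ports:
            - containerPort: 8080              # container port assumed for illustration

The matching Service (sketched earlier) selects these pods by the “app: apiserver” label.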
Now that all the pieces are in place, we’re almost ready to deploy our application to the Kubernetes cluster. Unfortunately, the default chart files in the repository are tailored to our CI/CD pipeline, which releases the public images and the chart as described in the short version above. That makes them not directly usable with Helm. So, to make the chart work with the code as it is in the GitHub repository, we need to edit two files. To make this easier, the repository already contains versions of these files with the necessary changes; you just need to swap them in to use our Helm chart on your local minikube cluster.
Copy:
Finally, our Helm chart is ready and we can deploy it:
helm install --name gs-todo --namespace todo ./helm/giantswarm-todo-app
Let’s first list our Helm releases:
helm list
Let’s now check that everything is up with:
kubectl -n todo get pods
After a few minutes you should see all pods “Running” and “2/2” in the “Ready” column. As you might remember, we have just one container in each pod, but Linkerd injected another one, “linkerd-proxy”, so we get two containers in each Pod and the “2/2” Pod status.
In Part 4, we'll be ready to see our application and the cloud-native stack in action! Stay tuned.
When running Kubernetes clusters in production, you will realize that you will need more, sometimes much more, than just one cluster. You will need to care not only about the deployment but also upgrades, security issues, and monitoring.
That’s where Giant Swarm can help — we offer managed Kubernetes clusters so that you don’t have to worry about managing them yourself. We also offer managed applications — well-known cloud-native projects that we can run for you on top of your Kubernetes cluster, creating a fully managed cloud-native platform.