Sep 14, 2022
At Giant Swarm, Kubernetes is central to all that we do. That means that we care very much about the content, quality, and expressiveness of the multitude of APIs that Kubernetes offers to get things done. We have a vested interest, and so we actively participate in the community in order to help improve the core APIs. And, when we need to, we also extend Kubernetes with our own APIs.
One API that we and our customers use a lot is the Ingress API. It facilitates the routing of external traffic to the services running in-cluster, according to rules defined in Ingress resource definitions. Unfortunately, whilst it's one of the oldest APIs exposed by Kubernetes (extensions/v1beta1 first appeared in September 2015), it has been widely perceived by the community as limited in its utility.
The Ingress API was deliberately designed to be terse, so as to enable it to be easily implemented by third-party ingress controllers and public cloud providers. As a result, it has changed little over the years and only graduated from its beta status in 2020. But this simplicity comes at a cost; it has caused a lot of frustration for cluster administrators wanting to use the more advanced features of the proxy engines upon which Ingress controllers are built (e.g. NGINX). For example, the API provides no inherent means of defining a rewrite of the path of an HTTP request, which is a fairly basic and common requirement.
In order to circumvent the lack of expressiveness in the API, ingress controller implementers turned to Kubernetes annotations as an informal mechanism for extending the API. For the popular, community-provided NGINX Ingress Controller, the following annotation rewrites the request's path to the root path before the request is routed to a backend service in the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
The use of a single annotation is all well and good in the simple scenario shown here. But the effects of annotation usage can soon add up. There are over 100 different possible annotations for the NGINX Ingress Controller, for example. This makes writing and maintaining Ingress resource definitions unwieldy and prone to error. Then, other ingress controllers, such as the Contour Ingress Controller, use a completely different set of annotations. This renders Ingress resource definitions non-portable between different Ingress controller types. Further, some ingress controllers even bypass the Ingress API altogether, preferring to use their own custom resource types as a means of providing access to the features of the underlying proxy software.
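To illustrate how annotations accumulate, here is a sketch of a metadata section combining several of the NGINX Ingress Controller's annotations; the particular combination is hypothetical, but each annotation shown is a real one:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # Rewrite the request path to the root path
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Redirect plain HTTP requests to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # Raise the maximum allowed request body size
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    # Extend the upstream read timeout (seconds)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
```

None of these settings would carry over to, say, Contour, which expects its own annotation names for equivalent behaviors.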
These issues, and others besides, render the whole Ingress experience in Kubernetes less than optimal. It's for this reason that the Kubernetes Network Special Interest Group launched the Gateway API project in 2019, to engineer a richer alternative to the limited Ingress API.
Whilst the Ingress API may be limited in its expression, its use over a long period of time has enabled the Kubernetes community to coalesce on a set of use cases and requirements. This has informed the development of the Gateway API, and the project is much better placed to deliver an API that serves the needs of apps running atop Kubernetes.
The Gateway API has a number of noble aims that will significantly improve the task of shepherding external traffic to the backend services running in a Kubernetes cluster. Here are a few highlights.
Firstly, it lends itself to role-based delineation of administrative responsibilities. For example, it decouples the definition of traffic routes (HTTPRoute) from the abstract definition of the infrastructure or software (Gateway) that handles the routing activity. Cluster administrators can create Gateway resources that define logical network endpoints bound to the Gateway’s IP address(es). These ‘listeners’, which comprise hostname, port number, and protocol, constrain the type of routes that can be associated with a Gateway. With these constraints in place, application developers can then define routes that are capable of being attached to a Gateway, for their specific application scenarios.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: contour
  namespace: contour
spec:
  gatewayClassName: contour
  listeners:
  - allowedRoutes:
      namespaces:
        from: All
    name: http
    port: 80
    protocol: HTTP
In the definition of the Gateway above, a single listener has been defined, allowing routes defined in any namespace to attach to the Gateway. The listener handles HTTP traffic on port 80.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: giantswarm-myapp-route
  namespace: default
spec:
  hostnames:
  - "giantswarm.io"
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: contour
    namespace: contour
  rules:
  - backendRefs:
    - name: myapp-svc
      port: 80
The above HTTPRoute definition references the Gateway as its parent. HTTP traffic arriving on port 80, with a Host header equating to giantswarm.io, is routed by the Gateway to a Kubernetes backend service called myapp-svc. Future enhancements will help make this decoupling even more granular, by allowing the creation of parent/child route relationships, using inclusion to embed route configuration snippets from child into parent. This caters to delegation scenarios, and provides the opportunity for route composition from more than one source.
The Ingress API is limited to defining traffic at layer 7 only, specifically HTTP/S traffic. Whilst this may cater to the vast majority of use cases, there will always be a requirement for handling TCP streams, and even UDP streams, too. The proxy software solutions that underpin ingress controller variants are inherently capable of handling pure layer 4 traffic, but of course, this option is not directly exposed through the Ingress API. Whilst they are experimental features of the Gateway API at present, TCPRoute and UDPRoute resources will become first-class constituent components of the API in due course.
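As a sketch of what layer 4 routing looks like, here is a TCPRoute in the experimental v1alpha2 API version; the route, listener, and service names are hypothetical, and the parent Gateway is assumed to have a TCP listener:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: mydb-route
  namespace: default
spec:
  parentRefs:
  - name: contour
    namespace: contour
    sectionName: tcp        # a listener on the Gateway configured for TCP
  rules:
  - backendRefs:
    - name: mydb-svc        # plain layer 4 forwarding; no hostnames or paths
      port: 5432
```

Note the absence of hostnames and matching rules; at layer 4, the entire TCP stream accepted by the listener is simply forwarded to the backend.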
Other than through the kludge that is ingress annotation (over)use, extending the Ingress API beyond its vanilla expression is not possible. Perhaps with this limitation in mind, the Gateway API has been designed to be extensible. For example, if the existing and planned route types (HTTPRoute, TCPRoute, UDPRoute, GRPCRoute) don't satisfy a requirement, then custom routes can be implemented as required. As it's early days for the Gateway API, the mechanics of the extension points aren't overly mature.
These are just a few of the features that the API provides, but there are a whole lot more, including TLS configuration for routes, traffic splitting by weight and/or HTTP headers, HTTP request rewrites, and redirects, and route conflict resolution.
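As one illustration of the traffic splitting mentioned above, weights on an HTTPRoute's backendRefs divide requests proportionally between services. This sketch assumes the Gateway from the earlier example and two hypothetical service names, as in a canary rollout:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: myapp-canary-route
  namespace: default
spec:
  parentRefs:
  - name: contour
    namespace: contour
  rules:
  - backendRefs:
    - name: myapp-svc-stable   # receives roughly 90% of requests
      port: 80
      weight: 90
    - name: myapp-svc-canary   # receives roughly 10% of requests
      port: 80
      weight: 10
```

Achieving the same split with the Ingress API typically requires controller-specific canary annotations, whereas here it is part of the core route specification.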
The Gateway API has been in the works since November 2019, and on 12 July 2022, the project released v0.5.0. This release saw the graduation of some of the key APIs (GatewayClass, Gateway, and HTTPRoute) from alpha to beta status. It also saw the establishment of standard and experimental release channels, with the former containing only beta-level APIs.
Clearly, there's still a long way to go before the Gateway API reaches a level of maturity and stability that will allow organizations to rely on its use in production scenarios. That said, the Ingress API remained at beta level for 5 years before graduating to 'stable', and was routinely used in production settings by organizations large and small!
Meanwhile, progress on the Gateway API is being keenly watched, and there are already a number of early implementations of the API. But, what’s of particular interest to us at Giant Swarm, is a recent announcement by Matt Klein, the creator of the popular Envoy proxy. In a blog article, he introduces a new open-source project called the Envoy Gateway, which aims "to become the de facto standard for Kubernetes ingress supporting the Gateway API".
As you might expect, the Envoy Gateway will be based on the Envoy proxy, and will borrow ideas from the Contour and Emissary ingress projects from VMware and Ambassador Labs, respectively. The notion is that the community should benefit from a single Envoy-based implementation of the Gateway API, rather than having to choose between a number of competing solutions. It also aims to abstract away the complexities of the Envoy proxy, while leaving open the possibility of exposing additional proxy features, as allowed by the extension capabilities of the Gateway API.
In conjunction with the Gateway API, the Envoy Gateway project is a compelling development in the Kubernetes ingress story, and one that Giant Swarm is following very closely.
It has been an age since the introduction of the Ingress API, and since the extensive discussion began concerning its perceived limitations. With the introduction of v0.5.0 of the Gateway API, it finally feels as if the community is moving towards a networking solution that is fit for purpose. It's probably not a coincidence that the Envoy Gateway project has emerged at this juncture, too, but it's not the only new initiative in town. The Gateway API for Mesh Management and Administration (GAMMA) initiative has recently emerged as a Gateway API subproject, to represent the interests of the service mesh community. This means the Gateway API may eventually figure in the routing of east-west inter-service traffic in Kubernetes clusters, as well as the north-south traffic use case associated with ingress.
As the Gateway API continues to mature, Giant Swarm will be evaluating each of the solutions that seek to implement the API. We'll be tracking their progress to see which of the solutions satisfies our needs, and the needs of our customers, whilst ensuring the long-term viability of the solution. We'd love to hear the thoughts of anyone in the community who is an early adopter of a Gateway API implementation. Let us know your thoughts!