Why Is Securing Kubernetes so Difficult?

by Puja Abbassi on Oct 25, 2018


If you’re already familiar with Kubernetes, the question in the title will probably resonate deep within your very being. And if you’re only just getting started on your cloud native journey, and Kubernetes represents a looming mountain to conquer, you’ll quickly come to realise the pertinence of the question.

Security is hard at the best of times, but when your software applications are composed of a multitude of small, dynamic, scalable, distributed microservices running in containers, then it gets even harder. And it’s not just the ephemerality of the environment that ratchets up the difficulties, it’s also the adoption of new workflows and toolchains, all of which bring their own new security considerations.

Let’s dig a little deeper.


Skills


Firstly, Kubernetes, and some of the other tools used in the delivery pipeline of containerised microservices, are complex. They have a steep learning curve, are subject to an aggressive release cycle with frequent change, and require considerable effort to keep on top of all of the nuanced aspects of platform security. If the team members responsible for security don’t understand how the platform should be secured, or worse, nobody has been assigned the responsibility, then it’s conceivable that glaring security holes could exist in the platform. At best, this could prove embarrassing, and at worst, it could have pernicious consequences.


Focus


In the quest to become innovators in their chosen market, or to be nimbler in response to external market forces (for example, customer demands or competitor activity), organizations new and old, small and large, are busy adopting DevOps practices. The focus is on the speed of delivery of new features and fixes, blurring the traditional lines between development and operations. It’s great that we consider the operational aspects as we develop and define our integration, testing, and delivery pipeline, but what about security? Security shouldn’t be an afterthought; it needs to be an integral part of the software development lifecycle, considered at every step in the process. Sadly, this is often not the case, but there is a growing recognition of the need for security to ‘shift left’ and to be accommodated in the delivery pipeline. This practice has come to be known as DevSecOps, or continuous security.


Complexity


We’ve already alluded to the fact that Kubernetes and its symbiotic tooling are complex in nature, and somewhat difficult to master. But it gets worse, because there are multiple layers in the Kubernetes stack, each of which has its own security considerations. It’s simply not enough to lock down one layer whilst ignoring the others that make up the stack. This would be a bit like locking the door whilst leaving the windows wide open.

Having to consider and implement multiple layers of security introduces more complexity. But it also has a beneficial side effect: it provides ‘defence in depth’, such that if one security mechanism is circumvented by a would-be attacker, another mechanism in the same or another layer can intervene and render the attack ineffective.


Securing All the Layers


What are the layers, then, that need to be secured in a Kubernetes platform?

First, there is an infrastructure layer, which comprises the machines and the networking connections between them. The machines may consist of physical or abstracted hardware components, and will run an operating system and (usually) the Docker Engine.

Second, there is a further infrastructure layer that is composed of the Kubernetes cluster components: the control plane components running on the master node(s), and the components that interact with container workloads running on the worker nodes.

The next layer deals with applying various security controls to Kubernetes, in order to control access to, and from within, the cluster, to define policy for running container workloads, and to provide workload isolation.
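
As an illustration of what this layer looks like in practice, the following is a minimal sketch of a Kubernetes NetworkPolicy that isolates a workload at the network level. The namespace and pod labels are hypothetical; a real policy would be tailored to your own workloads, and it only takes effect if the cluster’s network plugin supports network policies.

```yaml
# Illustrative NetworkPolicy: only pods labelled app=frontend may reach
# pods labelled app=backend on TCP port 8080; all other ingress traffic
# to the backend pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```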

Finally, a workload security layer deals with the security, provenance, and integrity of the container workloads themselves. This security layer should not only deal with the tools that help to manage the security of the workloads, but should also address how those tools are incorporated into the end-to-end workflow.


Some Common Themes


It’s useful to know that there are some common security themes that run through most of the layers that need our consideration. Recognizing them in advance, and taking a consistent approach in their application, can help significantly in implementing security policy.

  • Principle of Least Privilege - a commonly applied principle in wider IT security, its concern is limiting the access that users and application services have to available resources, such that the access provided is just sufficient to perform the assigned function (see the RBAC sketch after this list). This helps to prevent privilege escalation; if a container workload is compromised, for example, and it’s been deployed with just enough privileges for it to perform its task, the fallout from the compromise is limited to the privileges assigned to the workload.
  • Software Currency - keeping software up to date is crucial in the quest to keep platforms secure. It goes without saying that security-related patches should be applied as soon as is practically possible, and other software components should be exercised thoroughly in a test environment before being applied to production services. Some care needs to be taken when deploying brand new major releases (e.g. 2.0), and as a general rule, it’s not wise to deploy alpha, beta, or release candidate versions to production environments. Interestingly, this doesn’t necessarily hold true for API versions associated with Kubernetes objects. The API for the commonly used Ingress object, for example, has been at version v1beta1 since Kubernetes v1.1.
  • Logging & Auditing - having the ability to check back to see what or who instigated a particular action or chain of actions is extremely valuable in maintaining the security of a platform. The logging of audit events should be configured in all layers of the platform stack, including the auditing of Kubernetes cluster activity using an audit policy (a sketch follows this list). Audit logs should be collected and shipped to a central repository using a log shipper, such as Fluentd or Filebeat, where they can be stored for subsequent analysis using a tool such as Elasticsearch, or a public cloud equivalent.
  • Security vs. Productivity Trade-Off - in some circumstances, security might be considered a hindrance to productivity; in particular, developer productivity. The risk associated with allowing the execution of privileged containers, for example, in a cluster dedicated to development activities, might be a palatable one, if it allows a development team to move at a faster pace. The trade-off between security and productivity (and other factors) will be different for a development environment, a production environment, and even a playground environment used for learning and trying things out. What’s unacceptable in one environment may be perfectly acceptable in another. The risk associated with relaxing security constraints should not simply be disregarded, however; it should be carefully calculated, and wherever possible, be mitigated. The use of privileged containers, for example, can be mitigated using Kubernetes-native security controls, such as RBAC and Pod Security Policies (see the final sketch after this list).
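
To make the least-privilege principle a little more concrete, here is a minimal sketch of a Kubernetes RBAC Role and RoleBinding. The namespace and service account name are hypothetical; the point is that the subject is granted read-only access to pods in a single namespace, and nothing else.

```yaml
# Illustrative least-privilege RBAC: read-only access to pods in one
# namespace, bound to a single (hypothetical) service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: ci-reader            # hypothetical service account
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```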
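
For auditing of cluster activity, the API server can be pointed at an audit policy file (via its --audit-policy-file flag). The following is a minimal, illustrative policy, assuming a reasonably recent cluster that supports the audit.k8s.io/v1 API (older clusters use the v1beta1 version); it avoids logging sensitive payloads while still recording who changed what.

```yaml
# Illustrative audit policy: record Secrets and ConfigMaps at Metadata
# level only (so sensitive data never lands in the logs), capture request
# bodies for write operations, and log everything else at Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
  - level: Metadata
```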
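
And as a sketch of mitigating the privileged-container risk mentioned above, a Pod Security Policy can simply refuse privileged workloads. This is an illustrative, deliberately minimal policy; in practice it must also be enabled via the PodSecurityPolicy admission controller and made usable through RBAC, and further restrictions would normally be added.

```yaml
# Illustrative Pod Security Policy: disallow privileged containers and
# privilege escalation, require non-root users, and restrict the volume
# types pods may use.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```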

Series Outline


Configuring security on a Kubernetes platform is difficult, but not impossible! This is an introductory article in a series entitled Securing Kubernetes for Cloud Native Applications, which aims to lift the lid on aspects of security for a Kubernetes platform. We can’t cover every topic and every facet, but we’ll aim to provide a good overview of the security requirements in each layer, as well as some insights from our experience of running production-grade Kubernetes clusters for our customers.

The series of articles comprises the following: