Kubernetes is becoming the de facto standard for container orchestration. Its power in deploying and managing cloud-native applications comes with the risk of security misconfiguration. Kubernetes is a complex piece of software, and it is easy to make mistakes that expose the cluster to a variety of security risks. In this blog, we will look at various techniques for attacking a Kubernetes cluster, along with mitigations, cluster hardening and security best practices. This blog is intended for penetration testers and security consultants tasked with auditing and pentesting a Kubernetes cluster's configuration and the workloads running inside it.

NOTE: I have kept this blog concise so that it can be used as a checklist. The detailed exploitation steps will be covered in a separate blog.

This blog is divided into 2 parts: Hardening/Defending and Attacking the cluster. Hardening covers the security best practices. Attacking covers the offensive side and includes the steps to perform penetration testing. More details on attacking the cluster will be covered in a separate blog (if required). As a penetration tester, go through all the points while auditing the cluster.

Before starting an audit of a Kubernetes cluster, it is important to know how the cluster is set up. There are 3 primary ways to run a Kubernetes cluster:

  1. Kubernetes as a Service: e.g. EKS, GKE, AKS, DigitalOcean, etc.
  2. Setting up your own cluster using tools like kops, kubeadm, kubicorn, kubespray or Cluster API
  3. Setting up your own cluster manually (the hard way)

If we are using Kubernetes as a Service (KaaS), we don't have access to the control plane of the cluster, which consists of the kube-apiserver, kube-controller-manager, kube-scheduler and etcd. These are managed by the service provider (e.g. AWS, GCP, Azure), which we trust to have hardened them according to security best practices. Skip the control-plane hardening test cases from the checklist if you are auditing a cluster deployed using KaaS.

Setting up the cluster manually is very time-consuming and error-prone, so it is not recommended. As a result, almost all clusters created today use option 2. These tools ship with many secure defaults, but it is still worthwhile to audit them: there may be scenarios where a default configuration has been changed to make something work, thereby introducing security vulnerabilities.

Hardening/Defending the Cluster
  • Run kube-bench to check whether Kubernetes is deployed according to the security best practices defined in the CIS Kubernetes Benchmark.
  • Scan YAML manifests using a static analysis tool like kubesec.
  • Run Lynis to check the hardening of all master and worker nodes.
  • Enable AppArmor or SELinux on the underlying master and worker nodes so that a secure PodSecurityPolicy can be enforced.
  • Use key-based SSH access to master and worker nodes.
  • Scan the worker nodes (and the masters, if pods are scheduled on them) regularly to ensure that pods don't persist any malicious data (malware, viruses) on the host.
  • Only the API server should be exposed outside the private network/VPC; no other Kubernetes component should be exposed outside the private network.
  • The API server should be placed behind a firewall.
  • Rotate the certificates used in the control plane. Use solutions like cert-manager, Vault.
  • Use TLS bootstrapping if autoscaling is used for worker nodes. Verify the --rotate-certificates flag in the kubelet configuration.
  • Use Network Policy to restrict pod-pod and pod-internet traffic.
  • For clusters running in the cloud, restrict network access to the cloud metadata endpoint, e.g. 169.254.169.254 (AWS, Azure) and metadata.google.internal (GCP).
  • Use Service Mesh (e.g. Istio, Linkerd) for mutual TLS and authorization between microservices.
  • Service accounts should follow the principle of least privilege. Check the privileges of service accounts using rakkess.
  • If privileged service accounts are required, prevent their tokens from being mounted into pods that don't need them by setting automountServiceAccountToken: false in the pod specification file.
  • Enable the NodeRestriction admission controller so that each kubelet can only modify its own Node object and the pods bound to it.
  • To prevent denial-of-service attacks, enable the LimitRanger and ResourceQuota admission controllers.
  • Use nodeSelector or affinity options in the pod specification to schedule sensitive pods on selected nodes.
  • Scan container images before storing them in the registry. Use tools like Trivy, Twistlock, Qualys, Sysdig, StackRox, Black Duck, Docker Hub, etc.
  • In each pod specification file, set imagePullPolicy: Always to force the kubelet to pull the image on every container start, ensuring the current image is run.
  • Always pull images from a secure, registered registry. Use the imagePullSecrets option in the pod specification.
  • Pull images by digest rather than tag, e.g. docker-repository/repository-name/image-name@sha256:digest. Avoid using the latest tag.
  • Minimize the container image attack surface by removing the utilities like wget, curl, cat, more, less, sh, bash, ssh etc.
  • Use volume mounts to pass secrets to containers, rather than environment variables or hardcoding.
  • Have a proper PodSecurityPolicy in place. Use securityContext in the pod specification if PodSecurityPolicy is not enabled.
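
As a sketch of the NetworkPolicy point above (namespace and labels are illustrative, not from any particular cluster), the following manifest default-denies ingress to "api" pods and then allows only traffic from "frontend" pods on TCP 8080:

```yaml
# Illustrative NetworkPolicy: pods labelled app: api in the prod
# namespace accept ingress only from pods labelled app: frontend,
# and only on TCP 8080. All names are hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress        # selected pods become default-deny for ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy objects are only enforced if the cluster's CNI plugin supports them (e.g. Calico or Cilium); on a CNI without policy support they are silently ignored.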
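
Several of the pod-level bullets above can be combined into a single spec. The sketch below is a hypothetical example (image name, digest and secret names are placeholders): image pinned by digest with imagePullPolicy: Always and imagePullSecrets, service account token automount disabled, secrets passed via a volume mount, and a restrictive securityContext:

```yaml
# Illustrative hardened pod spec; all names and the digest are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  automountServiceAccountToken: false   # don't expose the SA token to the app
  imagePullSecrets:
    - name: registry-creds              # hypothetical registry pull secret
  containers:
    - name: app
      # Pull by digest, not tag; the digest below is a placeholder.
      image: registry.example.com/team/app@sha256:digest-placeholder
      imagePullPolicy: Always
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                 # drop all Linux capabilities
      resources:                        # bound resource usage per container
        limits:
          cpu: "500m"
          memory: 256Mi
      volumeMounts:
        - name: app-secrets
          mountPath: /etc/secrets       # secrets via volume, not env vars
          readOnly: true
  volumes:
    - name: app-secrets
      secret:
        secretName: app-secrets         # hypothetical Secret object
```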

Attacking the Cluster
  • Run kube-hunter. Look for any CVEs associated with the cluster's Kubernetes version.
  • Use nmap to port-scan the master and worker nodes for exposed Kubernetes services. The default ports for the Kubernetes components are:
    • kube-apiserver: 6443
    • Kubelet: 10250
    • kube-scheduler: 10251
    • kube-controller-manager: 10252
    • etcd: 2379
  • Check if pods are running as root.
  • Check the privileges of the service account token mounted into each pod at /var/run/secrets/kubernetes.io/serviceaccount/token. By default, the service account doesn't have any privileges. Use rakkess to view the privileges associated with a particular service account.
  • Check if the kubelet is accessible on port 10250 of the worker nodes. If yes, verify whether anonymous access is enabled via the --anonymous-auth flag on the kubelet.
  • Look for secrets stored in plain text in pod specifications and checked into version control (e.g. GitHub, GitLab). To store secrets, use solutions like GitLab secrets, kubesec, Vault, etc.
  • Try scheduling pods with hostPath volume mount to gain privilege escalation.
  • Try scheduling the pods on the master nodes. Taints should be present on the master nodes so that the pods are not scheduled on them.
  • Look for privileged service accounts with permission to get/list/watch secrets. Use rakkess.
  • Look for service accounts with impersonate privileges. This may lead to unintended behaviour and privilege escalation. Use rakkess.
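
The hostPath and master-taint checks above can be combined into one test pod. The sketch below is a hypothetical lab example (run it only within an authorized engagement): if admission control and PodSecurityPolicy allow it to schedule, the node's root filesystem is readable under /host, which is effective root on the host; the toleration additionally tests whether master nodes accept workload pods.

```yaml
# Illustrative privilege-escalation test pod; names are placeholders.
# If this pod is admitted, the cluster fails the hostPath check, and if
# it lands on a control-plane node, the master-taint check as well.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-test
spec:
  tolerations:                          # probe the master-taint point
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  containers:
    - name: shell
      image: alpine:3.18
      command: ["sleep", "3600"]        # keep the pod alive for inspection
      volumeMounts:
        - name: host-root
          mountPath: /host              # node's / appears here
  volumes:
    - name: host-root
      hostPath:
        path: /
```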

This may not be an exhaustive checklist for pentesting and auditing a Kubernetes cluster, but I have tried to cover the most common security pitfalls in Kubernetes. I will be updating this checklist regularly, so make sure you bookmark this page.

I am planning to add a few detailed step-by-step guides to pwning a Kubernetes cluster. Please subscribe to the mailing list (on the right sidebar) to get updates on my new posts. I hope this article was informative. Feel free to provide your comments and feedback below.

Happy Learning 🙂

The author is a security enthusiast with interest in web application security, cloud-native application development and Kubernetes.
