The following document scores a Kubernetes 1.15.x RKE cluster provisioned according to the Rancher v2.3.x hardening guide against the CIS 1.4.1 Kubernetes benchmark.
The CIS Benchmark version v1.4.1 covers the security posture of Kubernetes 1.13 clusters. This self-assessment has been run against Kubernetes 1.15, using the guidelines outlined in the CIS v1.4.1 benchmark. Updates to the CIS benchmarks will be applied to this document as they are released.
This document is a companion to the Rancher v2.3.x security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don’t apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters.
This document is to be used by Rancher operators, security teams, auditors and decision makers.
For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.4.1. You can download the benchmark after logging in to CISecurity.org.
Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
Because of this, the commands used to audit a Rancher Labs cluster differ from those in the CIS Benchmark. Where they differ, the Rancher Labs-specific commands are provided for testing.
When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the jq command to provide human-readable formatting.
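All of the audits below follow the same pattern: inspect the running container for the service in question and use jq to match the expected argument. As a convenience, that pattern can be wrapped in a small helper. The sketch below is illustrative only (the check_flag name is not part of this guide); it relies on jq -e returning a non-zero exit status when no match is produced.

# Illustrative helper (not part of the hardening guide): print the matching
# argument of a service container, or exit non-zero if it is absent.
check_flag() {
  docker inspect "$1" | jq -e --arg re "$2" '.[0].Args[] | match($re).string'
}

# Example: confirm the API server disables anonymous authentication.
check_flag kube-apiserver '--anonymous-auth=false' && echo Pass || echo Fail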
The following scored controls do not currently pass, and Rancher Labs is working towards addressing these through future enhancements to the product.
--kubelet-certificate-authority
--hostname-override
Ensure that the --anonymous-auth argument is set to false
Audit
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--anonymous-auth=false").string'
Returned Value: --anonymous-auth=false
Result: Pass
--basic-auth-file
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--basic-auth-file=.*").string'
Returned Value: null
--insecure-allow-any-token
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--insecure-allow-any-token").string'
Ensure that the --kubelet-https argument is set to true
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--kubelet-https=false").string'
--insecure-bind-address
Notes
The flag should either not be set or be set to --insecure-bind-address=127.0.0.1. RKE sets this flag to --insecure-bind-address=127.0.0.1.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--insecure-bind-address=(?:(?!127\\.0\\.0\\.1).)+")'
Ensure that the --insecure-port argument is set to 0
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--insecure-port=0").string'
Returned Value: --insecure-port=0
--secure-port
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--secure-port=6443").string'
Returned Value: --secure-port=6443
--profiling
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--profiling=false").string'
Returned Value: --profiling=false
--repair-malformed-updates
Note: This deprecated flag was removed in Kubernetes 1.14, so it cannot be set.
AlwaysAdmit
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(AlwaysAdmit).*").captures[].string'
AlwaysPullImages
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(AlwaysPullImages).*").captures[].string'
Returned Value: AlwaysPullImages
DenyEscalatingExec
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(DenyEscalatingExec).*").captures[].string'
Returned Value: DenyEscalatingExec
SecurityContextDeny
This SHOULD NOT be set if you are using a PodSecurityPolicy (PSP). From the CIS Benchmark document:
This admission controller should only be used where Pod Security Policies cannot be used on the cluster, as it can interact poorly with certain Pod Security Policies
Several system services (such as nginx-ingress) utilize SecurityContext to switch users and assign capabilities. These exceptions to the general principle of not allowing privilege or capabilities can be managed with PSP.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(SecurityContextDeny).*").captures[].string'
Result: Document
NamespaceLifecycle
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(NamespaceLifecycle).*").captures[].string'
Returned Value: NamespaceLifecycle
--audit-log-path
This path is the path inside of the container. It’s combined with the RKE cluster.yml extra-binds: option to map the audit log to the host filesystem.
Audit logs should be collected and shipped off-system to guarantee their integrity.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--audit-log-path=/var/log/kube-audit/audit-log.json").string'
Returned Value: --audit-log-path=/var/log/kube-audit/audit-log.json
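The corresponding host-side mapping comes from the RKE cluster.yml used to build the cluster. The fragment below is a sketch of what that mapping typically looks like (the extra_binds key and host path are assumptions based on the hardening guide; adjust them to your environment), followed by a check that the bind is present on the running container.

# Sketch of the assumed cluster.yml fragment that maps the audit log to the host:
#
#   services:
#     kube-api:
#       extra_binds:
#         - "/var/log/kube-audit:/var/log/kube-audit"
#
# Verify a matching bind mount exists on the running kube-apiserver container:
docker inspect kube-apiserver | jq -e '.[0].HostConfig.Binds // [] | .[] | select(test("kube-audit"))'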
Ensure that the --audit-log-maxage argument is set to 30 or as appropriate
Audit logs should be collected and shipped off-system to guarantee their integrity. Rancher Labs recommends setting this argument to a low value to prevent audit logs from filling the local disk.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--audit-log-maxage=\\d+").string'
Returned Value: --audit-log-maxage=5
Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--audit-log-maxbackup=\\d+").string'
Returned Value: --audit-log-maxbackup=5
Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--audit-log-maxsize=\\d+").string'
Returned Value: --audit-log-maxsize=100
Ensure that the --authorization-mode argument is not set to AlwaysAllow
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--authorization-mode=(Node|RBAC|,)+").string'
Returned Value: --authorization-mode=Node,RBAC
--token-auth-file
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--token-auth-file=.*").string'
Ensure that the --kubelet-certificate-authority argument is set as appropriate

RKE uses the kubelet's ability to automatically create self-signed certificates. No CA certificate is saved that can be used to verify communication between kube-apiserver and the kubelet.
Mitigation
Make sure nodes with role:controlplane are on the same local network as your nodes with role:worker. Use network ACLs to restrict connections to the kubelet port (10250/tcp) on worker nodes, permitting them only from controlplane nodes.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--kubelet-certificate-authority=.*").string'
Returned Value: none
Result: Fail (See Mitigation)
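One way to implement the mitigation is a host firewall rule on each worker node that accepts kubelet-port traffic only from the controlplane nodes. The following is a sketch only; 10.0.1.0/24 is a placeholder for your controlplane subnet and should be replaced with your actual addresses and managed by whatever firewall tooling you already use.

# Run on each node with role:worker. 10.0.1.0/24 is a PLACEHOLDER for the
# network of your role:controlplane nodes.
iptables -A INPUT -p tcp --dport 10250 -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 10250 -j DROP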
Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate
Audit (--kubelet-client-certificate)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--kubelet-client-certificate=.*").string'
Returned Value: --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
Audit (--kubelet-client-key)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--kubelet-client-key=.*").string'
Returned Value: --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem
--service-account-lookup
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--service-account-lookup=true").string'
Returned Value: --service-account-lookup=true
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(PodSecurityPolicy).*").captures[].string'
Returned Value: PodSecurityPolicy
--service-account-key-file
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--service-account-key-file=.*").string'
Returned Value: --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem
Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate
Audit (--etcd-certfile)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--etcd-certfile=.*").string'
Returned Value: --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem
Audit (--etcd-keyfile)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--etcd-keyfile=.*").string'
Returned Value: --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem
ServiceAccount
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(ServiceAccount).*").captures[].string'
Returned Value: ServiceAccount
Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate
Audit (--tls-cert-file)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cert-file=.*").string'
Returned Value: --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem
Audit (--tls-private-key-file)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-private-key-file=.*").string'
Returned Value: --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem
--client-ca-file
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--client-ca-file=.*").string'
Returned Value: --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem
Result: Pass
Audit (Allowed Ciphers)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256).*").captures[].string'
Returned Value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256).*").captures[].string'
Returned Value: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305).*").captures[].string'
Returned Value: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384).*").captures[].string'
Returned Value: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305).*").captures[].string'
Returned Value: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384).*").captures[].string'
Returned Value: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_RSA_WITH_AES_256_GCM_SHA384).*").captures[].string'
Returned Value: TLS_RSA_WITH_AES_256_GCM_SHA384
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_RSA_WITH_AES_128_GCM_SHA256).*").captures[].string'
Returned Value: TLS_RSA_WITH_AES_128_GCM_SHA256
Audit (Disallowed Ciphers)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(CBC).*").captures[].string'
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(RC4).*").captures[].string'
--etcd-cafile
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--etcd-cafile=.*").string'
Returned Value: --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem
Returned Value: --authorization-mode=Node,RBAC
Result: Pass
NodeRestriction
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(NodeRestriction).*").captures[].string'
Returned Value: NodeRestriction
--experimental-encryption-provider-config
Notes
In Kubernetes 1.15.x this flag is --encryption-provider-config.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--encryption-provider-config=.*").string'
Returned Value: --encryption-provider-config=/opt/kubernetes/encryption.yaml
Only the first provider in the list is active.
grep -A 1 providers: /opt/kubernetes/encryption.yaml | grep aescbc
Returned Value: - aescbc:
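For reference, an encryption provider configuration with aescbc listed first generally takes the shape sketched below. The key name and layout are assumptions for illustration; generate your own 32-byte, base64-encoded key and never commit it to version control.

# Generate a suitable 32-byte key (example only):
head -c 32 /dev/urandom | base64

# Sketch of /opt/kubernetes/encryption.yaml with aescbc as the first provider:
#
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources:
#         - secrets
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded 32-byte key>
#         - identity: {}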
EventRateLimit
The EventRateLimit plugin requires setting the --admission-control-config-file option and configuring details in the following files:
/opt/kubernetes/admission.yaml
/opt/kubernetes/event.yaml
See Host Configuration for details.
Audit (Admissions plugin)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--enable-admission-plugins=.*(EventRateLimit).*").captures[].string'
Returned Value: EventRateLimit
Audit (--admission-control-config-file)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--admission-control-config-file=.*").string'
Returned Value: --admission-control-config-file=/opt/kubernetes/admission.yaml
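The two files referenced above typically take the following shape. The limits shown are illustrative placeholders rather than the exact values prescribed by the hardening guide (see its Host Configuration section for those).

# Sketch of /opt/kubernetes/admission.yaml pointing the EventRateLimit plugin
# at its own configuration file:
#
#   apiVersion: apiserver.k8s.io/v1alpha1
#   kind: AdmissionConfiguration
#   plugins:
#     - name: EventRateLimit
#       path: /opt/kubernetes/event.yaml
#
# Sketch of /opt/kubernetes/event.yaml (example limits only):
#
#   apiVersion: eventratelimit.admission.k8s.io/v1alpha1
#   kind: Configuration
#   limits:
#     - type: Server
#       qps: 5000
#       burst: 20000
#
# Confirm both files are present on the controlplane nodes:
stat -c "%n - %a" /opt/kubernetes/admission.yaml /opt/kubernetes/event.yaml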
AdvancedAuditing=false should not be set, but --audit-policy-file should be set and configured. See Host Configuration for a sample audit policy file.
Audit (Feature Gate)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--feature-gates=.*(AdvancedAuditing=false).*").captures[].string'
Audit (Audit Policy File)
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--audit-policy-file=.*").string'
Returned Value: --audit-policy-file=/opt/kubernetes/audit.yaml
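A minimal audit policy of the kind placed at /opt/kubernetes/audit.yaml logs request metadata for every request. The sketch below is illustrative; the policy shipped by the hardening guide may be more detailed.

# Sketch of a minimal /opt/kubernetes/audit.yaml:
#
#   apiVersion: audit.k8s.io/v1
#   kind: Policy
#   rules:
#     - level: Metadata
#
# Confirm the policy file exists on the controlplane nodes:
stat -c "%n - %a" /opt/kubernetes/audit.yaml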
--request-timeout
RKE uses the default value of 60s and doesn’t set this option. Tuning this value is specific to the environment.
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--request-timeout=.*").string'
docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--authorization-mode=.*").string'
Returned Value: "--authorization-mode=Node,RBAC"
"--authorization-mode=Node,RBAC"
docker inspect kube-scheduler | jq -e '.[0].Args[] | match("--profiling=false").string'
Returned Value: --profiling=false
Result: Pass
--address
docker inspect kube-scheduler | jq -e '.[0].Args[] | match("--address=127\\.0\\.0\\.1").string'
Returned Value: --address=127.0.0.1
Result: Pass
--terminated-pod-gc-threshold
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--terminated-pod-gc-threshold=\\d+").string'
Returned Value: --terminated-pod-gc-threshold=1000
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--profiling=false").string'
--use-service-account-credentials
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--use-service-account-credentials=true").string'
Returned Value: --use-service-account-credentials=true
--service-account-private-key-file
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--service-account-private-key-file=.*").string'
Returned Value: --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem
--root-ca-file
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--root-ca-file=.*").string'
Returned Value: --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem
RKE does not yet support certificate rotation. This feature is due for the 0.1.12 release of RKE.
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--feature-gates=.*(RotateKubeletServerCertificate=true).*").captures[].string'
Returned Value: RotateKubeletServerCertificate=true
docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--address=127\\.0\\.0\\.1").string'
Returned Value: --address=127.0.0.1
RKE doesn’t require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time.
Result: Pass (Not Applicable)
RKE doesn’t require or maintain a configuration file for kube-controller-manager. All configuration is passed in as arguments at container run time.
RKE doesn’t require or maintain a configuration file for kube-scheduler. All configuration is passed in as arguments at container run time.
RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time.
This is a manual check.
Audit (/var/lib/cni/networks/k8s-pod-network)
Note: This may return a lockfile. Permissions on this file do not need to be as restrictive as the CNI files.
stat -c "%n - %a" /var/lib/cni/networks/k8s-pod-network/*
Returned Value:
/var/lib/cni/networks/k8s-pod-network/10.42.0.2 - 644
/var/lib/cni/networks/k8s-pod-network/10.42.0.3 - 644
/var/lib/cni/networks/k8s-pod-network/last_reserved_ip.0 - 644
/var/lib/cni/networks/k8s-pod-network/lock - 750
Audit (/etc/cni/net.d)
stat -c "%n - %a" /etc/cni/net.d/*
/etc/cni/net.d/10-canal.conflist - 664
/etc/cni/net.d/calico-kubeconfig - 600
stat -c "%n - %U:%G" /var/lib/cni/networks/k8s-pod-network/*
/var/lib/cni/networks/k8s-pod-network/10.42.0.2 - root:root
/var/lib/cni/networks/k8s-pod-network/10.42.0.3 - root:root
/var/lib/cni/networks/k8s-pod-network/last_reserved_ip.0 - root:root
/var/lib/cni/networks/k8s-pod-network/lock - root:root
stat -c "%n - %U:%G" /etc/cni/net.d/*
/etc/cni/net.d/10-canal.conflist - root:root
/etc/cni/net.d/calico-kubeconfig - root:root
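For convenience, the four checks above can be combined into a single pass that prints mode and ownership together; this is simply a restatement of the audits, not an additional control.

# Print permissions and ownership for all CNI-related files in one pass:
stat -c "%n - %a %U:%G" /etc/cni/net.d/* /var/lib/cni/networks/k8s-pod-network/*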
Ensure that the etcd data directory permissions are set to 700 or more restrictive
Files underneath the data directory have permissions set to 700.
stat -c "%n - %a" /var/lib/rancher/etcd/* /var/lib/etcd/member - 700
stat -c %a /var/lib/rancher/etcd
Returned Value: 700
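If the directory is found to be more permissive than 700, a simple remediation sketch is to tighten it on each node with the etcd role; the path below matches the audit above.

# Remediation sketch (run on nodes with the etcd role):
chmod 700 /var/lib/rancher/etcd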
Ensure that the etcd data directory ownership is set to etcd:etcd
The etcd container runs as the etcd user. The data directory and files are owned by etcd.
stat -c %U:%G /var/lib/rancher/etcd
Returned Value: etcd:etcd
admin.conf
RKE does not store the default kubectl config credentials file on the nodes. It presents credentials to the user when rke is first run, and only on the device where the user ran the command. Rancher Labs recommends that this kube_config_cluster.yml file be kept in a secure store.
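As a sketch of what a secure store can mean in day-to-day use, restrict the file to its owner on the machine where rke was run and reference it explicitly; the commands below assume the file sits in the current working directory.

# Keep the generated kubeconfig private and reference it explicitly:
chmod 600 kube_config_cluster.yml
export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl get nodes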
scheduler.conf
stat -c %a /etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
Returned Value: 644
stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
Returned Value: root:root
controller-manager.conf
stat -c %a /etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml
stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml
ls -laR /etc/kubernetes/ssl/ |grep -v yaml
total 128
drwxr-xr-x 2 root root 4096 Jul 1 19:53 .
drwxr-xr-x 4 root root 4096 Jul 1 19:53 ..
-rw------- 1 root root 1679 Jul 1 19:53 kube-apiserver-key.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-apiserver-proxy-client-key.pem
-rw-r--r-- 1 root root 1107 Jul 1 19:53 kube-apiserver-proxy-client.pem
-rw------- 1 root root 1675 Jul 1 19:53 kube-apiserver-requestheader-ca-key.pem
-rw-r--r-- 1 root root 1082 Jul 1 19:53 kube-apiserver-requestheader-ca.pem
-rw-r--r-- 1 root root 1285 Jul 1 19:53 kube-apiserver.pem
-rw------- 1 root root 1675 Jul 1 19:53 kube-ca-key.pem
-rw-r--r-- 1 root root 1017 Jul 1 19:53 kube-ca.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1062 Jul 1 19:53 kube-controller-manager.pem
-rw------- 1 root root 1675 Jul 1 19:53 kube-etcd-172-31-16-161-key.pem
-rw-r--r-- 1 root root 1277 Jul 1 19:53 kube-etcd-172-31-16-161.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-etcd-172-31-24-134-key.pem
-rw-r--r-- 1 root root 1277 Jul 1 19:53 kube-etcd-172-31-24-134.pem
-rw------- 1 root root 1675 Jul 1 19:53 kube-etcd-172-31-30-57-key.pem
-rw-r--r-- 1 root root 1277 Jul 1 19:53 kube-etcd-172-31-30-57.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-node-key.pem
-rw-r--r-- 1 root root 1070 Jul 1 19:53 kube-node.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-proxy-key.pem
-rw-r--r-- 1 root root 1046 Jul 1 19:53 kube-proxy.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1050 Jul 1 19:53 kube-scheduler.pem
-rw------- 1 root root 1679 Jul 1 19:53 kube-service-account-token-key.pem
-rw-r--r-- 1 root root 1285 Jul 1 19:53 kube-service-account-token.pem
stat -c "%n - %a" /etc/kubernetes/ssl/*.pem |grep -v key
/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem - 644
/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem - 644
/etc/kubernetes/ssl/kube-apiserver.pem - 644
/etc/kubernetes/ssl/kube-ca.pem - 644
/etc/kubernetes/ssl/kube-controller-manager.pem - 644
/etc/kubernetes/ssl/kube-etcd-172-31-16-161.pem - 644
/etc/kubernetes/ssl/kube-etcd-172-31-24-134.pem - 644
/etc/kubernetes/ssl/kube-etcd-172-31-30-57.pem - 644
/etc/kubernetes/ssl/kube-node.pem - 644
/etc/kubernetes/ssl/kube-proxy.pem - 644
/etc/kubernetes/ssl/kube-scheduler.pem - 644
/etc/kubernetes/ssl/kube-service-account-token.pem - 644
stat -c "%n - %a" /etc/kubernetes/ssl/*key*
/etc/kubernetes/ssl/kube-apiserver-key.pem - 600
/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem - 600
/etc/kubernetes/ssl/kube-apiserver-requestheader-ca-key.pem - 600
/etc/kubernetes/ssl/kube-ca-key.pem - 600
/etc/kubernetes/ssl/kube-controller-manager-key.pem - 600
/etc/kubernetes/ssl/kube-etcd-172-31-16-161-key.pem - 600
/etc/kubernetes/ssl/kube-etcd-172-31-24-134-key.pem - 600
/etc/kubernetes/ssl/kube-etcd-172-31-30-57-key.pem - 600
/etc/kubernetes/ssl/kube-node-key.pem - 600
/etc/kubernetes/ssl/kube-proxy-key.pem - 600
/etc/kubernetes/ssl/kube-scheduler-key.pem - 600
/etc/kubernetes/ssl/kube-service-account-token-key.pem - 600
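A quick way to re-check the private keys is to list any key file that is not restricted to mode 600; no output means the audit above passes.

# List any private key under /etc/kubernetes/ssl that is not mode 600:
find /etc/kubernetes/ssl -name '*-key.pem' ! -perm 600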
Ensure that the --cert-file and --key-file arguments are set as appropriate
Audit (--cert-file)
docker inspect etcd | jq -e '.[0].Args[] | match("--cert-file=.*").string'
Note: Certificate file name may vary slightly, since it contains the IP of the etcd container.
Returned Value: --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-24-134.pem
Audit (--key-file)
docker inspect etcd | jq -e '.[0].Args[] | match("--key-file=.*").string'
Note: Key file name may vary slightly, since it contains the IP of the etcd container.
Returned Value: --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-24-134-key.pem
--client-cert-auth
Setting --client-cert-auth is the equivalent of setting --client-cert-auth=true.
docker inspect etcd | jq -e '.[0].Args[] | match("--client-cert-auth(=true)*").string'
Returned Value: --client-cert-auth
--auto-tls
docker inspect etcd | jq -e '.[0].Args[] | match("--auto-tls(?:(?!=false).*)").string'
Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate
Audit (--peer-cert-file)
docker inspect etcd | jq -e '.[0].Args[] | match("--peer-cert-file=.*").string'
Returned Value: --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-22-135.pem
Audit (--peer-key-file)
docker inspect etcd | jq -e '.[0].Args[] | match("--peer-key-file=.*").string'
Returned Value: --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-22-135-key.pem
--peer-client-cert-auth
Setting --peer-client-cert-auth is the equivalent of setting --peer-client-cert-auth=true.
docker inspect etcd | jq -e '.[0].Args[] | match("--peer-client-cert-auth(=true)*").string'
Returned Value: --peer-client-cert-auth
--peer-auto-tls
docker inspect etcd | jq -e '.[0].Args[] | match("--peer-auto-tls(?:(?!=false).*)").string'
RKE supports connecting to an external etcd cluster. This external cluster could be configured with its own distinct CA.
--trusted-ca-file is set and different from the --client-ca-file used by kube-apiserver.
docker inspect etcd | jq -e '.[0].Args[] | match("--trusted-ca-file=(?:(?!/etc/kubernetes/ssl/kube-ca.pem).*)").string'
Result: Pass (See Mitigation)
These “Not Scored” controls are implementation best practices. To ease the administrative burden, we recommend that you implement these best practices on your workload clusters by creating clusters with Rancher rather than using RKE alone.
Rancher has built-in support for maintaining and enforcing Kubernetes RBAC on your workload clusters.
Rancher can integrate with external authentication sources (LDAP, SAML, AD, and others), allowing your existing users and groups easy access with unique credentials.
With Rancher, users or groups can be assigned access to all clusters, a single cluster or a “Project” (a group of one or more namespaces in a cluster). This allows granular access control to cluster resources.
Rancher can (optionally) automatically create Network Policies to isolate “Projects” (a group of one or more namespaces) in a cluster.
See “Cluster Options” when creating a cluster with Rancher to turn on Network Isolation.
Ensure that the seccomp profile is set to docker/default in your pod definitions
Because this requires enabling the AllAlpha feature gates, we do not recommend enabling this feature at this time.
This practice does go against control 1.1.13, but we prefer using a PodSecurityPolicy and allowing security context to be set over a blanket deny.
Rancher allows users to set various Security Context options when launching pods via the GUI interface.
ImagePolicyWebhook
Image Policy Webhook requires a third-party service to enforce policy. It can be configured via the --admission-control-config-file. See the Host Configuration section for the admission.yaml file.
Rancher can (optionally) automatically create Network Policies to isolate projects (a group of one or more namespaces) within a cluster.
See the Cluster Options section when creating a cluster with Rancher to turn on network isolation.
Section 1.7 of this guide shows how to add and configure a default “restricted” PSP based on controls.
With Rancher you can create a centrally maintained “restricted” PSP and deploy it to all of the clusters that Rancher manages.
This RKE configuration has two Pod Security Policies.
default-psp, used in the kube-system, ingress-nginx and cattle-system namespaces
restricted
The restricted PodSecurityPolicy is available to all ServiceAccounts.
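For reference, a restricted policy consistent with the audits that follow generally looks like the sketch below. This is illustrative only; the policy actually created by the hardening guide may define additional fields (allowed volumes, seLinux and fsGroup rules, and so on).

# Sketch of a "restricted" PodSecurityPolicy consistent with the checks below:
#
#   apiVersion: policy/v1beta1
#   kind: PodSecurityPolicy
#   metadata:
#     name: restricted
#   spec:
#     privileged: false
#     hostPID: false
#     hostIPC: false
#     hostNetwork: false
#     allowPrivilegeEscalation: false
#     requiredDropCapabilities:
#       - NET_RAW
#     runAsUser:
#       rule: MustRunAsNonRoot
#     seLinux:
#       rule: RunAsAny
#     supplementalGroups:
#       rule: RunAsAny
#     fsGroup:
#       rule: RunAsAny
#     volumes:
#       - configMap
#       - emptyDir
#       - secret
#       - persistentVolumeClaim
#
# Confirm the policy exists in the cluster:
kubectl get psp restricted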
kubectl get psp restricted -o jsonpath='{.spec.privileged}' | grep "true"
kubectl get psp restricted -o jsonpath='{.spec.hostPID}' | grep "true"
kubectl get psp restricted -o jsonpath='{.spec.hostIPC}' | grep "true"
kubectl get psp restricted -o jsonpath='{.spec.hostNetwork}' | grep "true"
allowPrivilegeEscalation
kubectl get psp restricted -o jsonpath='{.spec.allowPrivilegeEscalation}' | grep "true"
Do not admit root containers (the runAsUser rule should not be RunAsAny)
kubectl get psp restricted -o jsonpath='{.spec.runAsUser.rule}' | grep "RunAsAny"
kubectl get psp restricted -o jsonpath='{.spec.requiredDropCapabilities}' | grep "NET_RAW"
Returned Value: [NET_RAW]
docker inspect kubelet | jq -e '.[0].Args[] | match("--anonymous-auth=false").string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned Value: --authorization-mode=Webhook
docker inspect kubelet | jq -e '.[0].Args[] | match("--client-ca-file=.*").string'
Returned Value: --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem
--read-only-port
docker inspect kubelet | jq -e '.[0].Args[] | match("--read-only-port=0").string'
Returned Value: --read-only-port=0
--streaming-connection-idle-timeout
docker inspect kubelet | jq -e '.[0].Args[] | match("--streaming-connection-idle-timeout=.*").string'
Returned Value: --streaming-connection-idle-timeout=1800s
--protect-kernel-defaults
docker inspect kubelet | jq -e '.[0].Args[] | match("--protect-kernel-defaults=true").string'
Returned Value: --protect-kernel-defaults=true
--make-iptables-util-chains
docker inspect kubelet | jq -e '.[0].Args[] | match("--make-iptables-util-chains=true").string'
Returned Value: --make-iptables-util-chains=true
Ensure that the --hostname-override argument is not set

Notes

This flag is used by most cloud providers; not setting it is not practical in most cases.
docker inspect kubelet | jq -e '.[0].Args[] | match("--hostname-override=.*").string'
Returned Value: --hostname-override=<ipv4 address>
Result: Fail
--event-qps
docker inspect kubelet | jq -e '.[0].Args[] | match("--event-qps=0").string'
Returned Value: --event-qps=0
RKE does not set these options and uses the kubelet's self-generated certificates for TLS communication. These files are located in the default directory (/var/lib/kubelet/pki).
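To see the kubelet's self-generated certificates, list that directory on a worker node. This is an informational check rather than a scored control, and it assumes the default bind mount of /var/lib/kubelet onto the host.

# Informational: list the kubelet's self-generated certificates and keys.
stat -c "%n - %a %U:%G" /var/lib/kubelet/pki/*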
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cert-file=.*").string'
Audit (--tls-private-key-file)
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-private-key-file=.*").string'
--cadvisor-port
docker inspect kubelet | jq -e '.[0].Args[] | match("--cadvisor-port=0").string'
--rotate-certificates
RKE handles certificate rotation through an external process.
docker inspect kubelet | jq -e '.[0].Args[] | match("--rotate-certificates=true").string'
RotateKubeletServerCertificate
docker inspect kubelet | jq -e '.[0].Args[] | match("--feature-gates=.*(RotateKubeletServerCertificate=true).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_RSA_WITH_AES_256_GCM_SHA384).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(TLS_RSA_WITH_AES_128_GCM_SHA256).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(CBC).*").captures[].string'
docker inspect kubelet | jq -e '.[0].Args[] | match("--tls-cipher-suites=.*(RC4).*").captures[].string'
kubelet.conf
This is the value of the --kubeconfig option.
stat -c %a /etc/kubernetes/ssl/kubecfg-kube-node.yaml
stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-node.yaml
RKE doesn’t require or maintain a configuration file for kubelet. All configuration is passed in as arguments at container run time.
stat -c %a /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
stat -c %a /etc/kubernetes/ssl/kube-ca.pem
stat -c %U:%G /etc/kubernetes/ssl/kube-ca.pem