This section describes HPA management with kubectl. It includes instructions for creating, viewing, and deleting HPAs, for writing an HPA manifest, and for configuring an HPA to scale based on custom metrics provided by third-party software.
In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI, and you can configure them to scale based on CPU or memory usage. For more information, refer to Managing HPAs with the Rancher UI. To scale HPAs based on metrics other than CPU or memory, you still need kubectl.
Clusters created with older versions of Rancher don’t automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to Manual HPA Installation for Clusters Created Before Rancher v2.0.7.
If you have an HPA manifest file, you can create, manage, and delete HPAs using kubectl:
Creating HPA
With manifest: kubectl create -f <HPA_MANIFEST>
Without manifest (CPU only): kubectl autoscale deployment hello-world --min=2 --max=5 --cpu-percent=50
Getting HPA info
Basic: kubectl get hpa hello-world
Detailed description: kubectl describe hpa hello-world
Deleting HPA
kubectl delete hpa hello-world
The HPA manifest is the config file used for managing an HPA with kubectl.
The following snippet demonstrates use of different directives in an HPA manifest. See the list below the sample to understand the purpose of each directive.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 100Mi
apiVersion: autoscaling/v2beta1 - The version of the Kubernetes autoscaling API group in use. This example manifest uses the beta version, which enables scaling by both CPU and memory.
name: hello-world - Indicates that the HPA targets the hello-world workload.
minReplicas: 1 - The minimum number of replicas running for the hello-world workload.
maxReplicas: 10 - The maximum number of replicas running for the hello-world workload.
targetAverageUtilization: 50 - Scales the deployment up when average CPU utilization of the running pods reaches 50 percent.
targetAverageValue: 100Mi - Scales the deployment up when average memory usage of the running pods reaches 100Mi.
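As a quick usage sketch, assuming the manifest above is saved to a file named hello-world-hpa.yaml (a hypothetical filename), you can create the HPA and then inspect its current targets and scaling events:

# kubectl create -f hello-world-hpa.yaml   # hello-world-hpa.yaml is a hypothetical filename
# kubectl describe hpa hello-world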
Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. Run the following commands to check if metrics are available in your installation:
$ kubectl top nodes
NAME                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-controlplane   196m         9%     1623Mi          42%
node-etcd           80m          4%     1090Mi          28%
node-worker         64m          3%     1146Mi          29%

$ kubectl -n kube-system top pods
NAME                                   CPU(cores)   MEMORY(bytes)
canal-pgldr                            18m          46Mi
canal-vhkgr                            20m          45Mi
canal-x5q5v                            17m          37Mi
canal-xknnz                            20m          37Mi
kube-dns-7588d5b5f5-298j2              0m           22Mi
kube-dns-autoscaler-5db9bbb766-t24hw   0m           5Mi
metrics-server-97bc649d5-jxrlt         0m           12Mi

$ kubectl -n kube-system logs -l k8s-app=metrics-server
I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true
I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1
I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version
I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250
I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink
I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443
If you created your cluster in Rancher v2.0.6 or earlier, refer to Manual HPA Installation for Clusters Created Before Rancher v2.0.7.
You can configure HPA to autoscale based on custom metrics provided by third-party software. The most common use case is autoscaling on application-level metrics (e.g., HTTP requests per second). HPA uses the custom.metrics.k8s.io API to consume these metrics. This API is enabled by deploying a custom metrics adapter for the metrics collection solution.
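For reference, the following manifest is a minimal sketch of an HPA that scales on a custom per-pod metric served through the custom.metrics.k8s.io API. The metric name http_requests_per_second and the target value are assumptions for illustration; they must match a metric actually exposed by your metrics adapter.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # A Pods metric is resolved through the custom.metrics.k8s.io API.
  # http_requests_per_second is a hypothetical metric name; substitute
  # one reported by your adapter (see the metric list later in this section).
  - type: Pods
    pods:
      metricName: http_requests_per_second
      targetAverageValue: 100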
For this example, we are going to use Prometheus. We begin with the following assumptions: Prometheus is deployed in the cluster, it is collecting appropriate metrics, and it is exposed at http://prometheus.mycompany.io:80.
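As a quick sanity check of that last assumption, you can confirm that the Prometheus HTTP API answers queries at that URL (a sketch; the standard /api/v1/query endpoint should return a JSON body with "status":"success"):

# curl "http://prometheus.mycompany.io:80/api/v1/query?query=up"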
Prometheus is available for deployment in the Rancher v2.0 catalog. Deploy it from the Rancher catalog if it isn't already running in your cluster.
For HPA to use custom metrics from Prometheus, the k8s-prometheus-adapter package is required in the kube-system namespace of your cluster. To install k8s-prometheus-adapter, we use the Helm chart available in the banzai-charts repository.
Initialize Helm in your cluster.
# kubectl -n kube-system create serviceaccount tiller
# kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
# helm init --service-account tiller
Clone the banzai-charts repo from GitHub:
# git clone https://github.com/banzaicloud/banzai-charts
Install the prometheus-adapter chart, specifying the Prometheus URL and port number.
# helm install --name prometheus-adapter banzai-charts/prometheus-adapter --set prometheus.url="http://prometheus.mycompany.io",prometheus.port="80" --namespace kube-system
Check that prometheus-adapter is running properly. Check the service pod and logs in the kube-system namespace.
Check that the service pod is Running. Enter the following command.
# kubectl get pods -n kube-system
From the resulting output, look for a status of Running.
NAME                                                      READY     STATUS    RESTARTS   AGE
...
prometheus-adapter-prometheus-adapter-568674d97f-hbzfx    1/1       Running   0          7h
...
Check the service logs to make sure the service is running correctly by entering the command that follows.
# kubectl logs prometheus-adapter-prometheus-adapter-568674d97f-hbzfx -n kube-system
Then review the log output to confirm the service is running.
...
I0724 10:18:45.696679 1 round_trippers.go:436] GET https://10.43.0.1:443/api/v1/namespaces/default/pods?labelSelector=app%3Dhello-world 200 OK in 2 milliseconds
I0724 10:18:45.696695 1 round_trippers.go:442] Response Headers:
I0724 10:18:45.696699 1 round_trippers.go:445] Date: Tue, 24 Jul 2018 10:18:45 GMT
I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json
I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581
I0724 10:18:45.696766 1 request.go:836] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]}
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}}
I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268]
I0724 10:18:51.727845 1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
...
Check that the metrics API is accessible from kubectl.
If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: https://<Kubernetes_URL>:6443.
# kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
If the API is accessible, you should receive output that’s similar to what follows.
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs"
:["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: https://<RANCHER_URL>/k8s/clusters/<CLUSTER_ID>. That is, add the suffix /k8s/clusters/<CLUSTER_ID> to the API path.
# kubectl get --raw /k8s/clusters/<CLUSTER_ID>/apis/custom.metrics.k8s.io/v1beta1