Note: If you have missed my previous articles on Docker and Kubernetes, you can find them here.
Application deployment models evolution.
Getting started with Docker.
Docker file and images.
Publishing images to Docker Hub and re-using them
Docker- Find out what's going on
Docker Networking- Part 1
Docker Networking- Part 2
Docker Swarm-Multi-Host container Cluster
Docker Networking- Part 3 (Overlay Driver)
Introduction to Kubernetes
Kubernetes- Diving in (Part 1)-Installing Kubernetes multi-node cluster
Kubernetes-Diving in (Part2)- Services
Kubernetes- Infrastructure As Code with Yaml (part 1)
Kubernetes- Infrastructure As Code Part 2- Creating PODs with YAML
Kubernetes Infrastructure-as-Code part 3- Replicasets with YAML
Kubernetes Infrastructure-as-Code part 4 - Deployments and Services with YAML
Deploying a microservices APP with Kubernetes
Kubernetes- Time based scaling of deployments with python client
Kubernetes Networking - The Flannel network explained
When deploying a microservices-based application on a Kubernetes cluster, it is essential to monitor the health of the PoDs and also of the nodes the PoDs are deployed on. Kubernetes has built-in objects like ReplicaSets that automatically recreate a PoD if it crashes and ensure the desired number of copies of an application component is maintained. However, what about the health of the nodes on which these PoDs are running, i.e., their CPU and memory utilization? Further, how do we know whether the PoDs themselves are healthy, i.e., not gasping for CPU cycles or getting greedy? In this article, I will show you how to install and use the kubectl top utility, which allows users to monitor PoDs and nodes.
Before we get to installation, let's talk a bit about how the Kubernetes scheduler works. The scheduler is responsible for deciding which PoD goes to which node (the actual placement of a PoD on a node is done by the kubelet running on that node). It decides PoD placement based on criteria such as the CPU and memory a PoD requires versus the amount of CPU and memory available on each node. It is possible for a user to override the default behavior of the K8s scheduler; one example is static node selection (refer to the busybox3 YAML file I used in the flannel network article). With this, we can infer the following:
The scheduler will not schedule a PoD if none of the nodes in the cluster satisfies the PoD's resource requirements.
It is possible to overwhelm a node by manually placing PoDs on it (either with static node selection or other methods like taints/tolerations and affinity rules).
PoDs themselves can overwhelm hardware resources (it is possible to limit resources when defining PoDs; that's a separate topic, but a minimal sketch follows this list).
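As a minimal sketch of these knobs (the PoD name, image and numbers below are placeholders for illustration only, not something deployed in this article), a PoD spec can declare resource requests that the scheduler compares against free capacity on each node, limits that cap the PoD, and a nodeSelector that pins it to a specific node:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-demo                       # hypothetical PoD, for illustration only
spec:
  nodeSelector:
    kubernetes.io/hostname: sathish-vm1    # static node selection, bypasses the normal scheduling criteria
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:                            # what the scheduler uses to pick a node
        cpu: "100m"
        memory: "64Mi"
      limits:                              # hard cap; the container cannot use more than this
        cpu: "250m"
        memory: "128Mi"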
"Kubectl top" described here enables users to monitor the resources of PoDs and nodes. This is similar to the top command on Linux. "Kubectl top" internally uses metrics-server. Let's install and try out kubectl top.
# Metrics-server is not installed by default
root@sathish-vm2:/home/sathish/metrics-server# kubectl top pods
error: Metrics API not available
root@sathish-vm2:/home/sathish# git clone https://github.com/kubernetes-sigs/metrics-server
Cloning into 'metrics-server'...
remote: Enumerating objects: 68, done.
remote: Counting objects: 100% (68/68), done.
remote: Compressing objects: 100% (64/64), done.
remote: Total 12645 (delta 25), reused 20 (delta 3), pack-reused 12577
Receiving objects: 100% (12645/12645), 12.57 MiB | 2.45 MiB/s, done.
Resolving deltas: 100% (6606/6606), done.
# Deploy metrics server
root@sathish-vm2:/home/sathish# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
This should have worked, and I should have been able to use kubectl top, but unfortunately it did not. Let's dig in and find out what's going on.
Note: I installed K8s with kubeadm.
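One quick sanity check (a generic check, output not shown from my session) is the metrics APIService that the apply above created; once metrics-server is healthy, its AVAILABLE column should report True:

# Check whether the metrics.k8s.io API is actually being served
kubectl get apiservice v1beta1.metrics.k8s.io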
root@sathish-vm2:/home/sathish# kubectl get deployment -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
coredns          2/2     2            2           45d
metrics-server   0/1     1            0           12s
root@sathish-vm2:/home/sathish# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-d8wzr               1/1     Running   0          45d
coredns-f9fd979d6-xcxzc               1/1     Running   0          45d
etcd-sathish-vm2                      1/1     Running   0          45d
kube-apiserver-sathish-vm2            1/1     Running   0          45d
kube-controller-manager-sathish-vm2   1/1     Running   19         45d
kube-flannel-ds-cq56k                 1/1     Running   0          45d
kube-flannel-ds-x7prc                 1/1     Running   0          45d
kube-proxy-lcf25                      1/1     Running   0          45d
kube-proxy-tf8z8                      1/1     Running   0          45d
kube-scheduler-sathish-vm2            1/1     Running   15         45d
metrics-server-5d5c49f488-f6qjz       0/1     Running   1          28s
root@sathish-vm2:/home/sathish# kubectl logs metrics-server-5d5c49f488-f6qjz -n kube-system
E1129 07:20:59.860754 1 server.go:132] unable to fully scrape metrics: [unable to fully scrape metrics from node sathish-vm1: unable to fetch metrics from node sathish-vm1: Get "https://172.28.147.44:10250/stats/summary?only_cpu_and_memory=true": x509: cannot validate certificate for 172.28.147.44 because it doesn't contain any IP SANs, unable to fully scrape metrics from node sathish-vm2: unable to fetch metrics from node sathish-vm2: Get "https://172.28.147.38:10250/stats/summary?only_cpu_and_memory=true": x509: cannot validate certificate for 172.28.147.38 because it doesn't contain any IP SANs]
I1129 07:20:59.860923 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1129 07:20:59.860939 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1129 07:20:59.860978 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1129 07:20:59.860982 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1129 07:20:59.861006 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1129 07:20:59.861010 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1129 07:20:59.861747 1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I1129 07:20:59.862338 1 secure_serving.go:197] Serving securely on [::]:4443
I1129 07:20:59.862378 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1129 07:20:59.961257 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1129 07:20:59.961265 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I1129 07:20:59.961322 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
As we can see, the issue is with certificates: the kubelet serving certificates on my nodes do not contain IP SANs, so metrics-server cannot validate them. I haven't set up proper certificates or a PKI on my Linux hosts and don't care to, as it's a demo/lab environment. A bit of googling turned up that this is a known issue that many users have reported here. The suggested workaround is to have metrics-server skip TLS verification towards the kubelet ("insecure TLS"). We can do that by modifying the YAML file for the metrics-server deployment and reapplying it.
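The only functional change compared to the manifest applied earlier is one extra argument, --kubelet-insecure-tls, in the metrics-server container. The relevant args section looks like this:

      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls      # added: skip verification of kubelet serving certificates (lab use only)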
Here is the full updated YAML file:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Let's apply this changed YAML file
root@sathish-vm2:/home/sathish# kubectl apply -f metrics-server.yaml
serviceaccount/metrics-server unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged
service/metrics-server created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/metrics-server configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
This seems to have worked
root@sathish-vm2:/home/sathish# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-d8wzr               1/1     Running   0          45d
coredns-f9fd979d6-xcxzc               1/1     Running   0          45d
etcd-sathish-vm2                      1/1     Running   0          45d
kube-apiserver-sathish-vm2            1/1     Running   0          45d
kube-controller-manager-sathish-vm2   1/1     Running   19         45d
kube-flannel-ds-cq56k                 1/1     Running   0          45d
kube-flannel-ds-x7prc                 1/1     Running   0          45d
kube-proxy-lcf25                      1/1     Running   0          45d
kube-proxy-tf8z8                      1/1     Running   0          45d
kube-scheduler-sathish-vm2            1/1     Running   15         45d
metrics-server-56c59cf9ff-mk9zt       1/1     Running   0          4m48s
Now let's try out kubectl top
root@sathish-vm2:/home/sathish# kubectl top nodes
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
sathish-vm1   45m          2%     899Mi           49%
sathish-vm2   170m         8%     1537Mi          49%
root@sathish-vm2:/home/sathish# kubectl top nodes --sort-by=cpu
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
sathish-vm2   170m         8%     1537Mi          49%
sathish-vm1   45m          2%     899Mi           49%
root@sathish-vm2:/home/sathish# kubectl top nodes --sort-by=memory
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
sathish-vm2   176m         8%     1537Mi          49%
sathish-vm1   47m          2%     894Mi           49%
root@sathish-vm2:/home/sathish# kubectl top pods --all-namespaces
NAMESPACE     NAME                                  CPU(cores)   MEMORY(bytes)
kube-system   coredns-f9fd979d6-d8wzr               5m           8Mi
kube-system   coredns-f9fd979d6-xcxzc               4m           8Mi
kube-system   etcd-sathish-vm2                      20m          28Mi
kube-system   kube-apiserver-sathish-vm2            67m          265Mi
kube-system   kube-controller-manager-sathish-vm2   25m          45Mi
kube-system   kube-flannel-ds-cq56k                 2m           10Mi
kube-system   kube-flannel-ds-x7prc                 2m           10Mi
kube-system   kube-proxy-lcf25                      1m           24Mi
kube-system   kube-proxy-tf8z8                      1m           16Mi
kube-system   kube-scheduler-sathish-vm2            4m           16Mi
kube-system   metrics-server-56c59cf9ff-mk9zt       5m           13Mi
# Creating a PoD in the default namespace
root@sathish-vm2:/home/sathish# kubectl run webserv --image=nginx
pod/webserv created
root@sathish-vm2:/home/sathish# kubectl top pods
NAME      CPU(cores)   MEMORY(bytes)
webserv   0m           3Mi
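A couple of variations worth knowing (shown as generic commands rather than output from my session): kubectl top pods also supports --sort-by, --containers breaks the numbers down per container inside a PoD, and since kubectl top is just a client of the Metrics API served by metrics-server, you can also query that API directly:

# Sort PoDs by CPU usage across all namespaces
kubectl top pods --all-namespaces --sort-by=cpu

# Show per-container usage within each PoD
kubectl top pods --containers

# Fetch raw node metrics straight from the Metrics API
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes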
Thanks for your time and I do hope this short write-up is useful. Happy weekend and stay safe!