Note: If you have missed my previous articles on Docker and Kubernetes, you can find them here.
Application deployment models evolution.
Getting started with Docker.
Docker file and images.
Publishing images to Docker Hub and re-using them
Docker- Find out what's going on
Docker Networking- Part 1
Docker Networking- Part 2
Docker Swarm-Multi-Host container Cluster
Docker Networking- Part 3 (Overlay Driver)
Introduction to Kubernetes
Kubernetes- Diving in (Part 1)
Kubernetes-Diving in (Part2)- Services
Kubernetes- Infrastructure As Code with Yaml (part 1)
Kubernetes- Infrastructure As Code Part 2- Creating PODs with YAML
Kubernetes Infrastructure-as-Code part 3- Replicasets with YAML
When an application or service like MySQL or Apache is deployed as a container, multiple instances of the service may be needed for scaling, fault tolerance, and redundancy; this is made possible by the ReplicaSet object. In the previous article, I wrote about how to deploy ReplicaSets with YAML.
Once an application is deployed, it must be periodically updated. Kubernetes enables users to upgrade one running instance (replica) of an application at a time. This is called a rolling update. The Kubernetes object that enables rolling updates and rollbacks is the Deployment.
For example, consider the following deployment with 3 instances running MySQL 8.0.
With the rolling update method, it is possible to upgrade one instance at a time (instance 1 below is upgraded to 8.0.22) and move on to the next instance only if the update is successful.
If the upgrade is not successful for whatever reason, it is possible to roll back to the previous version. Let's see these things in action.
The YAML file for a Deployment is very similar to that of a ReplicaSet.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    app: mydb
    tier: db
spec:
  replicas: 3
  template:
    metadata:
      name: mysqldb-pod
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: sathish123
  selector:
    matchLabels:
      app: mydb
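Although the file above relies on defaults, the rolling update behavior can be tuned explicitly under spec. Here is a minimal sketch; the field values are illustrative, not taken from the original file (the defaults, as the describe output later in this article shows, are 25% max unavailable and 25% max surge):

```yaml
spec:
  strategy:
    type: RollingUpdate        # the default strategy type for Deployments
    rollingUpdate:
      maxUnavailable: 1        # at most one replica down during an update
      maxSurge: 1              # at most one extra replica above the desired count
```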
I am going to deploy MySQL version 5.6 with the above file.
Like PODs and ReplicaSets, Deployments can be created with kubectl create -f <filename>
root@sathish-vm2:/home/sathish/deployment# kubectl create -f deployment.yaml
deployment.apps/database created
root@sathish-vm2:/home/sathish/deployment# kubectl get replicaset
NAME                  DESIRED   CURRENT   READY   AGE
database-7cfd764b7d   3         3         3       8s
root@sathish-vm2:/home/sathish/deployment# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
database-7cfd764b7d-jjvbg   1/1     Running   0          13s
database-7cfd764b7d-pztcj   1/1     Running   0          13s
database-7cfd764b7d-wrtvr   1/1     Running   0          13s
Creating a Deployment object creates a ReplicaSet with the desired number of replicas. Each of these replicas is a POD running the MySQL image.
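The ownership chain (Deployment -> ReplicaSet -> PODs) can be listed in a single command; a sketch, assuming the same label used in the file above:

```shell
# List the deployment, its replicaset, and its pods by the shared label
kubectl get deployment,replicaset,pods -l app=mydb
```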
root@sathish-vm2:/home/sathish/deployment# kubectl describe pods
Name: database-7cfd764b7d-jjvbg
Namespace: default
Priority: 0
Node: sathish-vm1/172.28.147.44
Start Time: Wed, 21 Oct 2020 13:40:17 +0000
Labels: app=mydb
pod-template-hash=7cfd764b7d
Annotations: <none>
Status: Running
IP: 10.244.1.23
IPs:
IP: 10.244.1.23
Controlled By: ReplicaSet/database-7cfd764b7d
Containers:
mydb:
Container ID: docker://07f4955e4ff475c70a12e1e960951e4d65eb90773b3c9961fd2830b689b40cf3
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:ce3c841261675431fcffa70e221078d7466ecd242b658fdcf6a1c6af26f13c0d
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 21 Oct 2020 13:40:19 +0000
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: sathish123
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-spkmz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-spkmz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-spkmz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s Successfully assigned default/database-7cfd764b7d-jjvbg to sathish-vm1
Normal Pulled 22s kubelet, sathish-vm1 Container image "mysql:5.6" already present on machine
Normal Created 21s kubelet, sathish-vm1 Created container mydb
Normal Started 21s kubelet, sathish-vm1 Started container mydb
Name: database-7cfd764b7d-pztcj
Namespace: default
Priority: 0
Node: sathish-vm1/172.28.147.44
Start Time: Wed, 21 Oct 2020 13:40:17 +0000
Labels: app=mydb
pod-template-hash=7cfd764b7d
Annotations: <none>
Status: Running
IP: 10.244.1.22
IPs:
IP: 10.244.1.22
Controlled By: ReplicaSet/database-7cfd764b7d
Containers:
mydb:
Container ID: docker://07169b734151b1b3e859e709ca43ef606067cbef108375b9c76232675efe7770
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:ce3c841261675431fcffa70e221078d7466ecd242b658fdcf6a1c6af26f13c0d
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 21 Oct 2020 13:40:19 +0000
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: sathish123
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-spkmz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-spkmz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-spkmz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s Successfully assigned default/database-7cfd764b7d-pztcj to sathish-vm1
Normal Pulled 22s kubelet, sathish-vm1 Container image "mysql:5.6" already present on machine
Normal Created 21s kubelet, sathish-vm1 Created container mydb
Normal Started 21s kubelet, sathish-vm1 Started container mydb
Name: database-7cfd764b7d-wrtvr
Namespace: default
Priority: 0
Node: sathish-vm1/172.28.147.44
Start Time: Wed, 21 Oct 2020 13:40:17 +0000
Labels: app=mydb
pod-template-hash=7cfd764b7d
Annotations: <none>
Status: Running
IP: 10.244.1.24
IPs:
IP: 10.244.1.24
Controlled By: ReplicaSet/database-7cfd764b7d
Containers:
mydb:
Container ID: docker://a5595c508a3bee30889e22900f67d42bce2735d6633988474e230908048347d0
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:ce3c841261675431fcffa70e221078d7466ecd242b658fdcf6a1c6af26f13c0d
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 21 Oct 2020 13:40:19 +0000
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: sathish123
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-spkmz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-spkmz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-spkmz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s Successfully assigned default/database-7cfd764b7d-wrtvr to sathish-vm1
Normal Pulled 22s kubelet, sathish-vm1 Container image "mysql:5.6" already present on machine
Normal Created 21s kubelet, sathish-vm1 Created container mydb
Normal Started 21s kubelet, sathish-vm1 Started container mydb
Now I am going to perform a rolling update of the deployment. To simulate a failure scenario, I am going to specify a nonexistent MySQL version (12.0).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    app: mydb
    tier: db
spec:
  replicas: 3
  template:
    metadata:
      name: mysqldb-pod
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: mysql:12.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: sathish123
  selector:
    matchLabels:
      app: mydb
Changes to any Kubernetes object can be applied with the kubectl apply -f <filename> command. The --record option is used to track change history.
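As a side note, an image change alone can also be made without editing the YAML file, using kubectl set image. A sketch, assuming the deployment and container names from the file above (note that in kubectl versions newer than the one used here, the --record flag has been deprecated):

```shell
# Change just the container image; this also triggers a rolling update
kubectl set image deployment/database mydb=mysql:12.0 --record
```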
root@sathish-vm2:/home/sathish/deployment# kubectl apply -f deployment.yaml --record
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/database configured
Let's try to list the PODs
root@sathish-vm2:/home/sathish/deployment# kubectl get pods
NAME                        READY   STATUS             RESTARTS   AGE
database-64d6b7cb54-qxkvp   0/1     ImagePullBackOff   0          2m9s
database-7cfd764b7d-jjvbg   1/1     Running            0          17m
database-7cfd764b7d-pztcj   1/1     Running            0          17m
database-7cfd764b7d-wrtvr   1/1     Running            0          17m
Kubernetes has attempted to create a new POD with the MySQL 12.0 image. As that image does not exist, the POD creation is failing. In the meantime, users will still be able to access the deployment through the existing replicas.
root@sathish-vm2:/home/sathish/deployment# kubectl get pods
NAME                        READY   STATUS             RESTARTS   AGE
database-64d6b7cb54-qxkvp   0/1     ImagePullBackOff   0          6m33s
database-7cfd764b7d-jjvbg   1/1     Running            0          21m
database-7cfd764b7d-pztcj   1/1     Running            0          21m
database-7cfd764b7d-wrtvr   1/1     Running            0          21m
root@sathish-vm2:/home/sathish/deployment# kubectl describe pods database-64d6b7cb54-qxkvp
Name: database-64d6b7cb54-qxkvp
Namespace: default
Priority: 0
Node: sathish-vm1/172.28.147.44
Start Time: Wed, 21 Oct 2020 13:55:23 +0000
Labels: app=mydb
pod-template-hash=64d6b7cb54
Annotations: <none>
Status: Pending
IP: 10.244.1.25
IPs:
IP: 10.244.1.25
Controlled By: ReplicaSet/database-64d6b7cb54
Containers:
mydb:
Container ID:
Image: mysql:12.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: sathish123
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-spkmz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-spkmz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-spkmz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m44s Successfully assigned default/database-64d6b7cb54-qxkvp to sathish-vm1
Warning Failed 6m33s kubelet, sathish-vm1 Failed to pull image "mysql:12.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution
Normal Pulling 4m40s (x4 over 6m43s) kubelet, sathish-vm1 Pulling image "mysql:12.0"
Warning Failed 4m27s (x4 over 6m33s) kubelet, sathish-vm1 Error: ErrImagePull
Warning Failed 4m27s (x3 over 6m18s) kubelet, sathish-vm1 Failed to pull image "mysql:12.0": rpc error: code = Unknown desc = Error response from daemon: manifest for mysql:12.0 not found: manifest unknown: manifest unknown
Warning Failed 3m58s (x7 over 6m33s) kubelet, sathish-vm1 Error: ImagePullBackOff
Normal BackOff 95s (x16 over 6m33s) kubelet, sathish-vm1 Back-off pulling image "mysql:12.0"
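At this point, instead of fixing the YAML file, the failed update could also be abandoned with a rollback; a sketch, assuming the deployment name used above:

```shell
# Check where the rollout is stuck
kubectl rollout status deployment/database

# Return to the last working revision (mysql:5.6 in this case)
kubectl rollout undo deployment/database
```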
Let's try to fix the error by specifying a valid image tag. I will just use the tag "latest".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    app: mydb
    tier: db
spec:
  replicas: 3
  template:
    metadata:
      name: mysqldb-pod
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: mysql:latest
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: sathish123
  selector:
    matchLabels:
      app: mydb
Applying the deployment
root@sathish-vm2:/home/sathish/deployment# kubectl apply -f deployment.yaml --record
deployment.apps/database configured
root@sathish-vm2:/home/sathish/deployment# kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
database   3/3     3            3           32m
root@sathish-vm2:/home/sathish/deployment# kubectl get pods
NAME                        READY   STATUS        RESTARTS   AGE
database-7b48f67576-278g6   1/1     Running       0          4s
database-7b48f67576-8dkhd   1/1     Running       0          7s
database-7b48f67576-tqbhc   1/1     Running       0          9s
database-7cfd764b7d-wrtvr   1/1     Terminating   0          32m
root@sathish-vm2:/home/sathish/deployment# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
database-7b48f67576-278g6   1/1     Running   0          76s
database-7b48f67576-8dkhd   1/1     Running   0          79s
database-7b48f67576-tqbhc   1/1     Running   0          81s
Kubernetes performed a rolling update: it brought up new instances and terminated old ones one at a time, until all replicas were running the new image.
root@sathish-vm2:/home/sathish/deployment# kubectl describe deployment
Name: database
Namespace: default
CreationTimestamp: Wed, 21 Oct 2020 13:40:17 +0000
Labels: app=mydb
tier=db
Annotations: deployment.kubernetes.io/revision: 3
kubernetes.io/change-cause: kubectl apply --filename=deployment.yaml --record=true
Selector: app=mydb
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=mydb
Containers:
mydb:
Image: mysql:latest
Port: <none>
Host Port: <none>
Environment:
MYSQL_ROOT_PASSWORD: sathish123
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: database-7b48f67576 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 35m deployment-controller Scaled up replica set database-7cfd764b7d to 3
Normal ScalingReplicaSet 19m deployment-controller Scaled up replica set database-64d6b7cb54 to 1
Normal ScalingReplicaSet 2m36s deployment-controller Scaled down replica set database-64d6b7cb54 to 0
Normal ScalingReplicaSet 2m36s deployment-controller Scaled up replica set database-7b48f67576 to 1
Normal ScalingReplicaSet 2m34s deployment-controller Scaled down replica set database-7cfd764b7d to 2
Normal ScalingReplicaSet 2m34s deployment-controller Scaled up replica set database-7b48f67576 to 2
Normal ScalingReplicaSet 2m32s deployment-controller Scaled down replica set database-7cfd764b7d to 1
Normal ScalingReplicaSet 2m31s deployment-controller Scaled up replica set database-7b48f67576 to 3
Normal ScalingReplicaSet 2m29s deployment-controller Scaled down replica set database-7cfd764b7d to 0
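Because the updates were applied with --record, the revision history of this deployment can also be listed; a sketch of the command (the CHANGE-CAUSE column is filled in from the recorded commands):

```shell
kubectl rollout history deployment/database
```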
Let's examine the PODs
root@sathish-vm2:/home/sathish/deployment# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
database-7b48f67576-278g6   1/1     Running   0          5m19s
database-7b48f67576-8dkhd   1/1     Running   0          5m22s
database-7b48f67576-tqbhc   1/1     Running   0          5m24s
root@sathish-vm2:/home/sathish/deployment# kubectl describe pods database-7b48f67576-278g6
Name: database-7b48f67576-278g6
Namespace: default
Priority: 0
Node: sathish-vm1/172.28.147.44
Start Time: Wed, 21 Oct 2020 14:12:46 +0000
Labels: app=mydb
pod-template-hash=7b48f67576
Annotations: <none>
Status: Running
IP: 10.244.1.28
IPs:
IP: 10.244.1.28
Controlled By: ReplicaSet/database-7b48f67576
Containers:
mydb:
Container ID: docker://73736a2ad1fb75b0a4815addb019d20c1261be3f945066e98f7fb7fa41ea2675
Image: mysql:latest
Image ID: docker-pullable://mysql@sha256:b30e3c13ab71f51c7951120826671d56586afb8d9e1988c480b8673c8570eb74
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 21 Oct 2020 14:12:47 +0000
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: sathish123
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-spkmz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-spkmz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-spkmz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m30s Successfully assigned default/database-7b48f67576-278g6 to sathish-vm1
Normal Pulled 5m29s kubelet, sathish-vm1 Container image "mysql:latest" already present on machine
Normal Created 5m29s kubelet, sathish-vm1 Created container mydb
Normal Started 5m29s kubelet, sathish-vm1 Started container mydb
The PODs are using the latest version of MySQL.
Now that I have created a deployment, I can create a service to allow users to access this deployment. The type of service I am going to use is NodePort.
Note: In the real world, chances are the MySQL PODs will be deployed as a MySQL cluster, and each instance will be able to read/write data from a common data store.
Here is my YAML file for the service
apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: mydb
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 32306
  selector:
    app: mydb
MySQL listens on port 3306, hence the port and targetPort are set to 3306.
nodePort is set to 32306. This is the port users can use to access the MySQL service.
Selector: The selector uses the "app" label, which matches what we specified in the deployment (mydb).
Let's apply this new YAML file
root@sathish-vm2:/home/sathish/service# kubectl apply -f mysql-service.yaml
service/database created
root@sathish-vm2:/home/sathish/service# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
database     NodePort    10.106.23.124   <none>        3306:32306/TCP   4s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          11d
root@sathish-vm2:/home/sathish/service# kubectl describe service database
Name: database
Namespace: default
Labels: app=mydb
Annotations: <none>
Selector: app=mydb
Type: NodePort
IP: 10.106.23.124
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
NodePort: <unset> 32306/TCP
Endpoints: 10.244.1.26:3306,10.244.1.27:3306,10.244.1.28:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
As expected, each POD is now an endpoint; the endpoint IP/port can be used to access the service from within the Kubernetes cluster nodes. However, to access the database service from an external network, we have to use the nodePort, which is 32306.
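From a machine outside the cluster, the service can be reached on any node's IP at the nodePort; a sketch using the mysql command-line client (the node IP is taken from the outputs above; adjust it for your environment):

```shell
# Connect to the NodePort service; -P sets the port, -p prompts for the password
mysql -h 172.28.147.44 -P 32306 -u root -p
```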
Let me try to access the service from a Windows client (HeidiSQL).
The reason I am getting an access denied message is that "root" logins are not allowed from remote hosts by default. Let me fix that by adding the MYSQL_ROOT_HOST environment variable and re-applying the deployment.
Note/Caution: In the real world, allowing root access to the DB from any host is a very bad idea.
Here is my updated YAML file for database deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  labels:
    app: mydb
    tier: db
spec:
  replicas: 3
  template:
    metadata:
      name: mysqldb-pod
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: mysql:latest
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: sathish123
        - name: MYSQL_ROOT_HOST
          value: "%"
  selector:
    matchLabels:
      app: mydb
Let's update the deployment by reapplying the YAML file
root@sathish-vm2:/home/sathish/deployment# kubectl apply -f deployment.yaml
deployment.apps/database configured
Now, when I try to access the database from the remote host, it works perfectly.
That's all for today, folks. Hope this brief introduction to Deployments was useful. Have a great day!