Sathish Kumar

Kubernetes- Installing and verifying a Multinode Kubernetes Cluster on Ubuntu- Step by Step

Updated: Jun 30, 2021



A Kubernetes cluster can be set up on a single node with Minikube, which is useful for development/test environments; production deployments use a multi-node Kubernetes cluster. In the previous article, I showed you how to deploy a multinode Kubernetes cluster with Ubuntu hosts running Docker. In this article, let's look at some Kubernetes concepts and finally deploy a service in the multinode cluster.


Node



A node is a host with Docker (or another container runtime) installed. A node can be a master or a worker (a.k.a. minion). When Pods are deployed, they are spread across the nodes of the cluster to achieve optimum load balancing.


Pod




A Pod is the smallest unit of abstraction in Kubernetes. A single Pod can contain multiple containers that are part of the same application; for a 3-tier application, a Pod could be made up of webserver, application-server, and DB-server containers. On the other hand, a Pod can also contain just a single container. Pods can be scaled with ReplicaSets.
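
For illustration, here is a minimal Pod manifest (a sketch; the Pod name and image are arbitrary choices, not taken from this cluster):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx          # a single-container Pod
    image: nginx:alpine
    ports:
    - containerPort: 80

Saving this as pod.yaml and running kubectl apply -f pod.yaml creates the Pod.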



ReplicaSets/Replication Controller


The replication controller (or its successor, the ReplicaSet) ensures that the required number of Pod replicas is always running. If a Pod or node goes down, the controller deploys new Pods to restore the desired replica count.
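
To see this self-healing in action, assuming a ReplicaSet labeled app=nginx is already running, delete one of its Pods and watch a replacement appear (a sketch; the Pod name is a placeholder):

kubectl delete pod <pod-name>
kubectl get pods -l app=nginx    # a new Pod is created to restore the desired count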


Deployment


A Deployment manages a set of identical Pods used to run replicas of an application/microservice in Kubernetes. Deployments are typically used in production and can perform hitless (rolling) updates to the application.
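
As a sketch, a Deployment manifest equivalent to the one we create later with kubectl (the names and replica count here are arbitrary) looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx

A hitless update can then be triggered with, for example, kubectl set image deployment/mynginx nginx=nginx:1.21, and Kubernetes replaces the Pods in batches.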


Kubernetes Networking


Unlike Docker Swarm, Kubernetes does not ship with a default network and expects users to set one up. Some of the requirements of the Kubernetes network model are:


  • All containers within a Pod should be able to communicate with one another (without NAT).

  • Containers should be able to communicate with nodes, and vice versa (without NAT).

  • Services running in Pods should be exposable to the outside world.

Fortunately, there are many Kubernetes networking solutions. Some of them are:


  • Cisco ACI

  • Cilium

  • Flannel

  • Calico

  • VMware NSX-T


Let's get started and set up a multinode cluster.


Installing Kubernetes and Docker on the master and worker nodes




1. Disable the swap file in /etc/fstab by commenting it out, and turn off swap.


# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=74665848-fc50-4e30-bdcc-5ad47c295ecd / ext4 errors=remount-ro 0 1
#/swapfile none swap sw 0 0



root@sathish-vm2:~# swapoff -a
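
If you prefer to script this step, something like the following should work (a sketch; it keeps a .bak backup of /etc/fstab and comments out any uncommented swap entry, so review the file afterwards):

# Comment out swap entries in /etc/fstab (keeps /etc/fstab.bak as a backup)
sed -i.bak '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab
# Turn off swap for the running system
swapoff -a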

2. Install Kubernetes



root@sathish-vm2:~# apt-get update && sudo apt-get install -y apt-transport-https curl

root@sathish-vm2:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -


root@sathish-vm2:~# cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

root@sathish-vm2:~# apt-get update

root@sathish-vm2:~# apt-get install -y kubelet kubeadm kubectl

root@sathish-vm2:~# apt-mark hold kubelet kubeadm kubectl
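
If you want to sanity-check the install before proceeding, the versions and the held packages can be listed (output omitted here):

root@sathish-vm2:~# kubeadm version
root@sathish-vm2:~# kubectl version --client
root@sathish-vm2:~# apt-mark showhold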





3. Install Docker



root@sathish-vm2:~# apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

root@sathish-vm2:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
gpg: WARNING: unsafe ownership on homedir '/home/extreme/.gnupg'

root@sathish-vm2:~# echo \
> "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
> $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

root@sathish-vm2:~# apt-get update

root@sathish-vm2:~# apt-get install docker-ce docker-ce-cli containerd.io
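
Before moving on, it is worth confirming that the Docker daemon is up (a quick check; output omitted):

root@sathish-vm2:~# systemctl status docker
root@sathish-vm2:~# docker run --rm hello-world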



Initializing the master node



root@sathish-vm2:~# kubeadm init --pod-network-cidr=10.244.0.0/16
……………………………………
kubeadm join 10.127.13.203:6443 --token 1hgoov.dbks7wnix14rnad3 \
 --discovery-token-ca-cert-hash sha256:bcb73f5f258621b27a53f5cdc8e3d49bf6a6a499ccfb972caa883e86f1e46c24
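
Keep this join command handy. If it is lost, or the token expires (kubeadm tokens are valid for 24 hours by default), a fresh join command can be printed on the master:

root@sathish-vm2:~# kubeadm token create --print-join-command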

Creating the kubeconfig file on the master node


sathish@sathish-vm2:~$ mkdir -p $HOME/.kube
sathish@sathish-vm2:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for extreme:
Sorry, try again.
[sudo] password for extreme:
sathish@sathish-vm2:~$
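
kubeadm init also suggests changing ownership of the copied file so kubectl works for the non-root user:

sathish@sathish-vm2:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config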


Adding worker node to cluster


Use the join command provided by kubeadm init to join the worker to the cluster:


root@sathish-vm1:~# kubeadm join 10.127.13.203:6443 --token 1hgoov.dbks7wnix14rnad3 \
> --discovery-token-ca-cert-hash sha256:bcb73f5f258621b27a53f5cdc8e3d49bf6a6a499ccfb972caa883e86f1e46c24
………………………
root@sathish-vm2:~# kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
kubemaster   NotReady   control-plane,master   13m   v1.21.2
kubeworker   NotReady   <none>                 41s   v1.21.2


Note: The nodes are in the “NotReady” state because we have not yet installed a CNI (Container Network Interface) networking plugin.


Installing container network plugin (flannel)



root@sathish-vm2:~# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
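
Note that this sysctl setting does not survive a reboot. To make it persistent, it can be dropped into a sysctl config file (a sketch, assuming a standard Ubuntu layout):

root@sathish-vm2:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
root@sathish-vm2:~# sysctl --system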



root@sathish-vm2:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
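
Before re-checking the nodes, you can confirm that the flannel Pods have come up (a quick check; the exact Pod names will differ per cluster):

root@sathish-vm2:~# kubectl get pods -A | grep -i flannel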


Check the nodes again


root@sathish-vm2:~# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
kubemaster   Ready    control-plane,master   18m     v1.21.2
kubeworker   Ready    <none>                 5m51s   v1.21.2


Removing the taint on the master node


By default, scheduling of Pods is disabled on the master node. This is achieved with a “NoSchedule” taint, which must be removed to allow Kubernetes to schedule workloads on the master node.



root@sathish-vm2:~# kubectl describe nodes | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: <none>

root@sathish-vm2:~# kubectl taint node kubemaster node-role.kubernetes.io/master:NoSchedule-
node/kubemaster untainted

root@sathish-vm2:~# kubectl describe nodes | grep Taint
Taints: <none>
Taints: <none>

Verifying the install by deploying various objects

Creating and verifying Kubernetes objects



Create and verify a Pod


root@sathish-vm2:~# kubectl run nginx --image=nginx:alpine
root@sathish-vm2:~# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE    IP           NODE         NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          100s   10.244.1.4   kubeworker   <none>           <none>

Create, verify, and scale a Deployment


root@sathish-vm2:~# kubectl create deployment mynginx --image=nginx --replicas=2
deployment.apps/mynginx created


root@sathish-vm2:~# kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
mynginx   1/2     2            1           16s
root@sathish-vm2:~# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
mynginx-5b686ccd46   2         2         2       79s
root@sathish-vm2:~# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-5b686ccd46-5bmtz   1/1     Running   0          82s
mynginx-5b686ccd46-prqtr   1/1     Running   0          82s

Scaling mynginx
root@sathish-vm2:~# kubectl scale deployment mynginx --replicas=5
deployment.apps/mynginx scaled
root@sathish-vm2:~# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
mynginx-5b686ccd46   5         5         2       2m20s
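
Finally, to reach the application from outside the cluster, the Deployment can be exposed as a NodePort Service (a sketch; the node port is assigned from the 30000-32767 range, so check kubectl get svc for the actual value, and <node-ip> is a placeholder for any node's address):

root@sathish-vm2:~# kubectl expose deployment mynginx --port=80 --type=NodePort
root@sathish-vm2:~# kubectl get svc mynginx
root@sathish-vm2:~# curl http://<node-ip>:<node-port>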

Hope this was informative. That's all for today folks.



