Note: If you have missed my previous articles on Docker and Kubernetes, you can find them here:
Application deployment models evolution.
Getting started with Docker.
Docker file and images.
Publishing images to Docker Hub and re-using them.
Docker- Find out what's going on.
Docker Networking- Part 1.
Docker Networking- Part 2.
Docker Swarm-Multi-Host container Cluster.
Docker Networking- Part 3 (Overlay Driver).
Introduction to Kubernetes.
Kubernetes- Diving in (Part 1)-Installing Kubernetes multi-node cluster.
Kubernetes-Diving in (Part2)- Services.
Kubernetes- Infrastructure As Code with Yaml (part 1).
Kubernetes- Infrastructure As Code Part 2- Creating PODs with YAML.
Kubernetes Infrastructure-as-Code part 3- Replicasets with YAML.
Kubernetes Infrastructure-as-Code part 4 - Deployments and Services with YAML.
Deploying a microservices APP with Kubernetes.
Kubernetes- Time based scaling of deployments with python client.
Kubernetes Networking - The Flannel network explained.
Kubernetes- Installing and using kubectl top for monitoring nodes and PoDs
Kubernetes Administration- Scheduling
Kubernetes Administration- Storage
Kubernetes Administration- Users
Kubernetes Administration - Network Policies with Calico network plugin
Service-Oriented Architecture (SOA) defines a methodology for developing and deploying software components that communicate with each other through well-defined interfaces such as REST. These components can be re-used across enterprise applications. Microservices are the evolution of Service-Oriented Architecture: independent mini services with a REST interface (or something else, like protobuf) that together make up an application. A microservice is usually deployed as a container, and many containers make up an application. One of the main advantages of a containerized application is that it can run anywhere without code changes- be it your local laptop or the AWS cloud. Kubernetes is usually used for deploying container-based microservice applications.
Here is a quick recap of major Kubernetes components:
Pods: Pods are the smallest unit that can be deployed in Kubernetes. A Pod can be made up of multiple containers- these containers share the same IP address.
ReplicaSets: ReplicaSets maintain a set of running instances of identical Pods and are used for horizontal scaling.
Deployments: Deployments can be used to manage ReplicaSets and also enable dynamic version updates.
ConfigMaps: ConfigMaps are key/value pairs that can be passed as parameters to running Pods (a minimal example is sketched right after this list).
Services: Services define how applications communicate within the cluster and externally. ClusterIP, NodePort, and LoadBalancer are the three main types of Services in Kubernetes.
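As a quick illustration of the ConfigMap entry above, a minimal definition might look like the following sketch (the name app-config and the keys APP_COLOR/APP_MODE are made up for this example):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name, for illustration only
data:
  APP_COLOR: "blue"         # plain key/value pairs
  APP_MODE: "production"    # Pods can consume these via env or envFrom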
These components can be created imperatively or declaratively. Kubernetes supports YAML definitions for declarative creation of components. For instance, a Service can be declared like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: result
  name: result
  namespace: vote
spec:
  type: NodePort
  ports:
  - name: "result-service"
    port: 5001
    targetPort: 80
    nodePort: 31001
  selector:
    app: result
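Once saved to a file, this Service can be created declaratively with kubectl apply (the file name result-service.yaml here is just an assumption for illustration):
kubectl apply -f result-service.yaml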
Let's consider a scenario where there are hundreds of applications with thousands of Pods, Deployments, and Services deployed across Kubernetes clusters on-premises and in AWS/GCP. Managing the YAML files and maintaining these applications and their YAML definitions would be a nightmare unless there is some orchestration tool for this- enter Rancher.
So what is Rancher? From rancher.com: "Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads."
Installing Rancher
To install Rancher, the first step is to deploy the Rancher server as a Docker container. I am going to do this on my worker node VM. It does not matter where this is deployed- it can be on any machine with Docker. I am being lazy and just don't want to deploy another VM for this.
root@sathish-vm1:/home/sathish# docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /home/sathish/rancher:/var/lib/rancher --privileged rancher/rancher
root@sathish-vm1:/home/sathish# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e04ebe54b6c3 rancher/rancher "entrypoint.sh" 23 seconds ago Up 20 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp quizzical_burnell
5a750f34e082 bc9c328f379c "/usr/local/bin/kube…" 4 months ago Up 4 months k8s_kube-proxy_kube-proxy-lcf25_kube-system_7b55f742-ffdc-4b2f-ac07-aa1852dc941a_0
9c03fefaf4c1 k8s.gcr.io/pause:3.2 "/pause" 4 months ago Up 4 months k8s_POD_kube-proxy-lcf25_kube-system_7b55f742-ffdc-4b2f-ac07-aa1852dc941a_0
Now I can access the Rancher WebUI using the IP of the VM. Set the password and continue.
The next page asks for the Rancher server URL. One of the requirements is that this URL should be accessible from the Kubernetes cluster. As I am installing Rancher on the worker node of the cluster I am going to manage, this requirement is implicitly satisfied, and I just clicked "Save URL". If I wanted to manage public cloud-based clusters, I would have to install Rancher on a host with a routable IP.
Adding Cluster
Click the Add Cluster button on the home screen.
Under cluster type, choose "Other Cluster" and specify a name. This brings up the following screen with the commands to install the Rancher agents.
Installing Rancher Agent
Paste the kubectl command from the above screen on the master node; this will create the "cattle-system" namespace and deploy the required Kubernetes objects.
curl --insecure -sfL https://172.28.147.44/v3/import/wmjd6jll9mm5hx4cxvshrj9d2dqdbp58drp2cnvg47r8rtxn6k2m68.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-7591665 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
root@sathish-vm2:/home/sathish# kubectl get all -n cattle-system
NAME READY STATUS RESTARTS AGE
pod/cattle-cluster-agent-864c657fb-dh4hd 1/1 Running 0 35s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cattle-cluster-agent 1/1 1 1 85s
NAME DESIRED CURRENT READY AGE
replicaset.apps/cattle-cluster-agent-864c657fb 1 1 1 35s
Once the components are deployed, you should be able to view the cluster status in the UI.
Various objects can be viewed by clicking the submenus, or you can use Cluster Explorer to get an overview of what's there.
Create Workloads (Deployments)
To create a new Deployment, click Cluster Manager -> Resources -> Workloads and fill in the required information. Rancher provides a GUI for deploying all Kubernetes objects, not just Deployments.
root@sathish-vm2:/home/sathish# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mynginx 2/2 2 2 101s
root@sathish-vm2:/home/sathish# kubectl get pods
NAME READY STATUS RESTARTS AGE
mynginx-6ff4cf5c8d-bnbmg 1/1 Running 0 105s
mynginx-6ff4cf5c8d-njxfl 1/1 Running 0 105s
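For reference, the mynginx Deployment created through the GUI could equally have been defined declaratively. A rough YAML sketch of the equivalent (the nginx image and the app: mynginx labels are assumptions based on the output above; Rancher generates its own labels):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  replicas: 2                 # matches the 2/2 shown by kubectl get deployment
  selector:
    matchLabels:
      app: mynginx            # assumed label, for illustration
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        image: nginx          # assumed image filled in via the Rancher form
        ports:
        - containerPort: 80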
Rancher, when combined with other tools like Helm (I will blog about this later), offers a powerful toolkit for DevOps/CI-CD and enables faster deployment of microservices-based applications anywhere.
That's it for today folks. Have a good weekend :)