Sathish Kumar

Docker Swarm - Multi-host container cluster

Updated: Jan 15, 2021

Docker makes it easy to deploy "microservices" as containers. In production, these containers run on multiple servers and need to be orchestrated. Docker Swarm is Docker's built-in orchestration system.


So what is Docker Swarm? From the Docker documentation:


"A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles. When you create a service, you define its optimal state (number of replicas, network, and storage resources available to it, ports the service exposes to the outside world, and more). Docker works to maintain that desired state. For instance, if a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. A task is a running container that is part of a swarm service and managed by a swarm manager, as opposed to a standalone container."


A node in a Docker Swarm cluster can be a "manager" node or a "worker" node. Manager nodes "manage" tasks and perform cluster management functions. Worker nodes perform assigned tasks and report status back to the managers. If a host in the swarm goes down, the manager nodes are responsible for reallocating its tasks and maintaining the "desired" state of the swarm cluster.


Manager nodes maintain an internal state store based on the "Raft" consensus algorithm. This Raft store is replicated across the manager nodes in a swarm cluster and is used to track workers, the state of the swarm, and other cluster data.



It is important to note that a manager node in a swarm cluster is essentially a "worker" node with special permissions.


Let's check out how things work with Docker Swarm.


Note: If you intend to follow along, ensure you have created at least two VMs with Docker installed on them. In my case, I am using Ubuntu 20 servers inside Hyper-V.
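If a host firewall is enabled on the VMs, the swarm ports also need to be open between the nodes: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic. As a rough sketch, assuming ufw is the firewall in use on Ubuntu, that would look like:

ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp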
 

By default, swarm mode is inactive, as seen in the output of "docker info":



root@ubuntu20-docker1:/home/sathish# docker info
Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 6
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version:
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-42-generic
 Operating System: Ubuntu Core 16
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 3.817GiB
 Name: ubuntu20-docker1
 ID: AASS:4KTX:B6TN:WHC3:4FUF:KAN5:D7Z5:QJPQ:ETJW:7CO3:6LQ4:ATS6
 Docker Root Dir: /var/snap/docker/common/var-lib-docker
 Debug Mode: false
 Username: wizardonwire
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
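As a quick check, "docker info" also accepts a Go-template format string, so the swarm state can be printed on its own. This one-liner is just a convenience and is not used in the rest of this article:

docker info --format '{{.Swarm.LocalNodeState}}'

It should print "inactive" at this point and "active" once swarm mode is enabled.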

Let's enable swarm mode on the first VM with "docker swarm init":



root@ubuntu20-docker1:/home/sathish# docker  swarm  init
Swarm initialized: current node (efhgyrygaplscg804h3wdgi82) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1pwpou1cx13ir6oias5mnqysbrmd2bgqgyl1izox2f6nrxjp3i-3mro3i1khnywex0k2cs1unf16 172.28.147.39:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

On the second VM, I am going to join this swarm cluster by copy-pasting the "docker swarm join" command.



root@sathish-ubuntu2:/home/sathish# docker swarm join --token SWMTKN-1-1pwpou1cx13ir6oias5mnqysbrmd2bgqgyl1izox2f6nrxjp3i-3mro3i1khnywex0k2cs1unf16 172.28.147.39:2377
This node joined a swarm as a worker.
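If the join command is lost, it can be printed again at any time from a manager node:

docker swarm join-token worker
docker swarm join-token manager

The first prints the join command for new workers, the second the join command for additional managers.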

Nodes in a Swarm cluster can be seen with "docker node ls"



root@ubuntu20-docker1:/home/sathish# docker node  ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
d9b5th39uktl17srsh7t8y2wx     sathish-ubuntu2     Ready               Active                                  19.03.11
efhgyrygaplscg804h3wdgi82 *   ubuntu20-docker1    Ready               Active              Leader              19.03.11

As we can see from the above, we have 1 manager and 1 worker node in the swarm cluster.
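Since a manager is essentially a worker with extra privileges, a node's role can be changed at any time from a manager node. Purely as an illustration (not needed for the rest of this walkthrough), the worker in this cluster could be promoted and later demoted like this:

docker node promote sathish-ubuntu2
docker node demote sathish-ubuntu2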


"docker service create" is used to create a "service" in the docker swarm cluster, this command is similar to "docker container run". With "docker service create" we can specify the number of instances of service required. Manager nodes in the cluster will create and maintain the number of replicas we specify.


Let's create 6 replicas of the httpd container and expose the service on port 80:




root@ubuntu20-docker1:/home/sathish# docker service create --replicas 6 --name web -p 80:80 httpd
2z19wj64sy37pb4zdg964ql7e
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
root@ubuntu20-docker1:/home/sathish# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
2z19wj64sy37        web                 replicated          6/6                 httpd:latest        *:80->80/tcp

The 6 replicas are distributed between the 2 nodes in the cluster: 3 on the manager node (VM1) and 3 on the worker node (VM2).



VM1

root@ubuntu20-docker1:/home/sathish# docker container ls
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS               NAMES
778d7a58c47f        httpd:latest        "httpd-foreground"   40 seconds ago      Up 36 seconds       80/tcp              web.1.qvgpda49crnpitvm2rlv4iro8
d4ee4ba4f116        httpd:latest        "httpd-foreground"   40 seconds ago      Up 37 seconds       80/tcp              web.3.zo9olsog2aaf2tncekwfvt3si
71f5b072d0eb        httpd:latest        "httpd-foreground"   40 seconds ago      Up 36 seconds       80/tcp              web.5.riqbg58kuhcwyxkwenq8q0v57

VM2

root@sathish-ubuntu2:/home/sathish# docker container ls
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS               NAMES
a12cfd234da1        httpd:latest        "httpd-foreground"   59 seconds ago      Up 56 seconds       80/tcp              web.2.zjqsd9c0b888ycggrltu3y2dk
438c62087888        httpd:latest        "httpd-foreground"   59 seconds ago      Up 55 seconds       80/tcp              web.6.d3lkzimfmm1hpjhgrm8z6x2iw
4bc31cfb251d        httpd:latest        "httpd-foreground"   59 seconds ago      Up 56 seconds       80/tcp              web.4.k6bl1ui3si9kgk66h896at28n
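Instead of running "docker container ls" on each node, the placement of all tasks can also be checked from the manager with "docker service ps", which lists every task along with the node it is running on (output not shown here):

docker service ps web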

As I mentioned before, manager nodes in a swarm are responsible for maintaining the desired state, in our case 6 replicas. I am going to reboot VM2 and check whether the 3 missing replicas are recreated on VM1.




root@ubuntu20-docker1:/home/sathish# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
d9b5th39uktl17srsh7t8y2wx     sathish-ubuntu2     Down                Active                                  19.03.11
efhgyrygaplscg804h3wdgi82 *   ubuntu20-docker1    Ready               Active              Leader              19.03.11
root@ubuntu20-docker1:/home/sathish# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
2z19wj64sy37        web                 replicated          3/6                 httpd:latest        *:80->80/tcp
root@ubuntu20-docker1:/home/sathish# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
2z19wj64sy37        web                 replicated          3/6                 httpd:latest        *:80->80/tcp
root@ubuntu20-docker1:/home/sathish# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
2z19wj64sy37        web                 replicated          4/6                 httpd:latest        *:80->80/tcp
root@ubuntu20-docker1:/home/sathish# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
2z19wj64sy37        web                 replicated          6/6                 httpd:latest        *:80->80/tcp

root@ubuntu20-docker1:/home/sathish# docker container ls
CONTAINER ID        IMAGE               COMMAND              CREATED              STATUS              PORTS               NAMES
d9571617924e        httpd:latest        "httpd-foreground"   About a minute ago   Up About a minute   80/tcp              web.2.k9ymuict2lrohpunmn8urm7mb
97488d952a90        httpd:latest        "httpd-foreground"   About a minute ago   Up About a minute   80/tcp              web.6.82ka87f7snlu01es2p2ohk5v2
06b415ae6c1a        httpd:latest        "httpd-foreground"   About a minute ago   Up About a minute   80/tcp              web.4.125w9c50pqkf9xq9vgegdc5kt
778d7a58c47f        httpd:latest        "httpd-foreground"   6 minutes ago        Up 6 minutes        80/tcp              web.1.qvgpda49crnpitvm2rlv4iro8
d4ee4ba4f116        httpd:latest        "httpd-foreground"   6 minutes ago        Up 6 minutes        80/tcp              web.3.zo9olsog2aaf2tncekwfvt3si
71f5b072d0eb        httpd:latest        "httpd-foreground"   6 minutes ago        Up 6 minutes        80/tcp              web.5.riqbg58kuhcwyxkwenq8q0v57

Once I brought down the worker node, the manager node detected the event and brought up 3 more replicas on VM1, restoring the desired state of 6. Note that when VM2 comes back online, Swarm does not automatically move running tasks back to it; the existing replicas stay on VM1 until they are rescheduled.
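The desired replica count can also be changed after the service is created, and the service can be removed when it is no longer needed. These commands are shown only for reference and are not part of the walkthrough above:

docker service scale web=10
docker service rm web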


That's all for today, folks! In the next article, I will talk about multi-host container networking and the overlay driver. Thanks for your time.






