Docker Swarm is a container orchestration and clustering tool for managing Docker hosts, and it is part of Docker Engine. It is Docker's native clustering tool, providing high availability and high performance for your application.
The primary objective of Docker Swarm is to group multiple Docker hosts into a single logical virtual server. This ensures availability and performance for your application by distributing it over a number of Docker hosts instead of just one.
In this tutorial you will learn:
- What is Docker Swarm
- How to Configure Hosts
- How to Install and Run Docker Service
- How to Configure the Manager Node for Swarm Cluster Initialization
- How to Configure Worker Nodes to join the Swarm Cluster
- How to Verify the Swarm Cluster
- How to Deploy new Service on Swarm Cluster
Software Requirements and Conventions Used
Category | Requirements, Conventions or Software Version Used
---|---
System | Ubuntu 18.04
Software | Docker-CE 18.09
Other | Privileged access to your Linux system as root or via the sudo command.
Conventions | # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command; $ – requires given Linux commands to be executed as a regular non-privileged user
Swarm Concept in Detail
The cluster management and orchestration features embedded in the Docker Engine are built using swarmkit.
A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (which manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles. When you create a service, you define its optimal state like number of replicas, network and storage resources available to it, ports the service exposes to the outside world etc. If a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. A task is a running container which is part of a swarm service and managed by a swarm manager.
One of the key advantages of swarm services over standalone containers is that you can modify a service’s configuration, including the networks and volumes it is connected to, without the need to manually restart the service. Docker will update the configuration, stop the service tasks with the out of date configuration, and create new ones matching the desired configuration.
When Docker is running in swarm mode, you can still run standalone containers on any of the Docker hosts participating in the swarm, as well as swarm services. A key difference between standalone containers and swarm services is that only swarm managers can manage a swarm, while standalone containers can be started on any daemon. Docker daemons can participate in a swarm as managers, workers, or both.
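The desired-state model described above can be sketched as a toy loop (purely illustrative, not real swarm code): the orchestrator compares the number of running tasks against the declared replica count and schedules a replacement for each missing one.

```shell
# Toy sketch of swarm's desired-state reconciliation (illustrative only):
# compare running tasks to the declared replica count and
# schedule a new task for each missing one.
DESIRED=2    # replicas declared when the service was created
RUNNING=0    # tasks currently running (e.g. after a node failure)
while [ "$RUNNING" -lt "$DESIRED" ]; do
  RUNNING=$((RUNNING+1))
  echo "scheduling task $RUNNING"
done
echo "converged at $RUNNING replicas"
```

The real scheduler works the same way at a high level: it never "restarts" a failed task, it simply creates new tasks until the observed state matches the declared state.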
Configure the Docker hosts
Before installing the necessary Docker packages for the swarm cluster, we will configure the hosts file on all the Ubuntu nodes.
- Manager Node – 192.168.1.103 (hostname: dockermanager)
- Worker Node 1 – 192.168.1.107 (hostname: dockerworker1)
- Worker Node 2 – 192.168.1.108 (hostname: dockerworker2)
Edit the /etc/hosts file on all three nodes with gedit or vim and add the following entries:
192.168.1.103 dockermanager
192.168.1.107 dockerworker1
192.168.1.108 dockerworker2
After updating the hosts file on each node, check connectivity with ping between all the nodes.
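The individual ping checks below can also be wrapped in a small loop run from each host; the host names assume the /etc/hosts entries added above.

```shell
# Check reachability of every cluster node in one pass.
# Host names assume the /etc/hosts entries configured earlier.
for host in dockermanager dockerworker1 dockerworker2; do
  if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host reachable"
  else
    echo "$host UNREACHABLE"
  fi
done
```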
From Docker Manager Host
# ping dockerworker1
# ping 192.168.1.107
# ping dockerworker2
# ping 192.168.1.108
From Docker Worker Node 1
# ping dockermanager
# ping 192.168.1.103
From Docker Worker Node 2
# ping dockermanager
# ping 192.168.1.103
Install and Run Docker Service
To create the swarm cluster, we need to install Docker on all server nodes. We will install Docker CE (Community Edition) on all three Ubuntu machines.
Before you install Docker CE for the first time on a new host machine, you need to set up the Docker repository; afterward, you can install and update Docker from it. Perform all of the steps below on all three Ubuntu nodes.
Update the apt package index:
# apt-get update
Install packages to allow apt to use a repository over HTTPS:
# apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Add Docker’s official GPG key:
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Use the following command to set up the stable repository:
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the apt package index again:
# apt-get update
Install the latest version of Docker CE:
# apt-get install docker-ce
After the installation is complete, start the docker service and enable it to launch every time at system boot.
# systemctl start docker
# systemctl enable docker
To allow a regular (non-root) user to run docker commands, add that user to the docker group:
# usermod -aG docker <username>
# usermod -aG docker manager
# usermod -aG docker worker1
# usermod -aG docker worker2
Now, log in as the designated user and run the docker hello-world image to verify:
# su - manager
$ docker run hello-world
On a successful run, docker prints the "Hello from Docker!" welcome message, confirming that the installation is working.
Configure the Manager Node for Swarm Cluster Initialization
In this step, we will create the swarm cluster from our nodes. To do so, we initialize swarm mode on the ‘dockermanager’ node and then join the ‘dockerworker1’ and ‘dockerworker2’ nodes to the cluster.
Initialize the Docker Swarm mode by running the following docker command on the ‘dockermanager’ node.
docker swarm init --advertise-addr <manager node IP address>
$ docker swarm init --advertise-addr 192.168.1.103
A ‘join token’ is generated by the ‘dockermanager’ node; it will be required to join the worker nodes to the cluster.
Configure Worker Nodes to join the Swarm Cluster
Now, to join the worker nodes to the swarm, run on each worker node the docker swarm join command that was printed in the swarm initialization step:
$ docker swarm join --token SWMTKN-1-4htf3vnzmbhc88vxjyguipo91ihmutrxi2p1si2de4whaqylr6-3oed1hnttwkalur1ey7zkdp9l 192.168.1.103:2377
Verify the Swarm Cluster
To see the node status and determine whether the nodes are active and available, list all the nodes in the swarm from the manager node:
$ docker node ls
If you ever lose your join token, it can be retrieved by running the following command on the manager node (for the manager token):
$ docker swarm join-token manager -q
In the same way, retrieve the worker token by running the following command on the manager node:
$ docker swarm join-token worker -q
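The -q flag prints only the bare token. As a sketch (using a placeholder token and the tutorial's manager address, both assumptions for illustration), a full join command for a worker can be rebuilt from that token like this:

```shell
# Rebuild a full 'docker swarm join' command from a quiet (-q) token.
# In a live swarm you would capture the real token on the manager with:
#   TOKEN=$(docker swarm join-token worker -q)
TOKEN="SWMTKN-1-placeholder"        # placeholder token for illustration
MANAGER_ADDR="192.168.1.103:2377"   # manager IP and default swarm port
JOIN_CMD="docker swarm join --token ${TOKEN} ${MANAGER_ADDR}"
echo "${JOIN_CMD}"
```

Note that running docker swarm join-token worker (without -q) already prints the complete command, ready to paste on a worker node.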
Deploy new Service on Swarm Cluster
In this step, we will create and deploy our first service to the swarm cluster. The new nginx web server service will run on the default HTTP port 80 and be published on port 8081 of the host machine. We will create this nginx service with 2 replicas, which means there will be 2 nginx containers running in our swarm. If either of these containers fails, it will be spawned again to maintain the desired number set by the replicas option.
$ docker service create --name my-web1 --publish 8081:80 --replicas 2 nginx
After the service has deployed successfully, you can check it with the docker service commands below:
$ docker service ls
docker service ps <service name>
$ docker service ps my-web1
To check whether the nginx service is working, we can either use the curl command or open the nginx welcome page in a browser on the host machine.
$ curl http://dockermanager:8081
In a browser on the host machine, we can likewise access the nginx welcome page.
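Right after a deployment the containers can take a moment to start, so a single curl may fail spuriously. A small polling helper (a sketch using curl; the URL in the comment assumes this tutorial's host name) avoids that false negative:

```shell
# Poll a URL until it answers, up to a given number of attempts.
check_url() {
  url=$1
  attempts=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      echo "up"
      return 0
    fi
    i=$((i+1))
    sleep 1
  done
  echo "down"
  return 1
}
# Example (host name from this tutorial's /etc/hosts setup):
#   check_url http://dockermanager:8081 5
```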
Now, if we need to scale the nginx service up to 3 replicas, run the following command on the manager node:
$ docker service scale my-web1=3
To check the result after scaling, use the docker service ls or docker service ps command.
We can use the docker service inspect command to check the extended details of a deployed service on the swarm. By default, it renders all results in a JSON array.
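To pull a single field out of that JSON, you can filter it with standard tools. Below is a sketch using sed on a trimmed, hypothetical fragment of the inspect output; on a live swarm you would pipe the real docker service inspect my-web1 output instead.

```shell
# Extract the replica count from a trimmed, hypothetical fragment
# of 'docker service inspect' JSON output.
SAMPLE='[{"Spec":{"Mode":{"Replicated":{"Replicas":3}}}}]'
# Live equivalent: docker service inspect my-web1 | sed -n 's/.../\1/p'
REPLICAS=$(printf '%s' "$SAMPLE" | sed -n 's/.*"Replicas":\([0-9]*\).*/\1/p')
echo "Replicas: $REPLICAS"
```

Docker also supports a --format flag on docker service inspect for exactly this kind of field extraction; the sed sketch simply illustrates working with the raw JSON.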
Conclusion
Docker has become an extremely popular way to configure, save, and share server environments using containers. Because of this, installing an application or even a large stack can often be as simple as running docker pull or docker run. Separating application functions into different containers also offers advantages in security and dependency management.