Getting Started With Docker Swarm
Getting started with Docker Swarm
Getting started with Docker Swarm involves setting up a cluster of Docker hosts to manage containerized applications. Docker Swarm is a native clustering and orchestration solution for Docker. Here’s a step-by-step guide to help you get started.
1. Install Docker
If you haven’t already, install Docker on all the machines you want to include in your Swarm cluster. You can follow the official Docker installation guides for your specific operating system.
2. Initialize a Swarm
Choose one machine to be the manager node, and use the following command to initialize the Swarm:
docker swarm init --advertise-addr <MANAGER_IP>
Example:
root@master-node:~# docker swarm init --advertise-addr 192.168.20.150
Swarm initialized: current node (6m77q8h2cmpibro3hr13wj7rx) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-56w57906xnlk9u5zqt9oxeknzk2ykivurxifa35hbuto7ej3gd-d8ogz1b14esj66zr4jtpfouop 192.168.20.150:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
3. Join Worker Nodes
After initializing the Swarm, you will receive a command with a token to join worker nodes to the Swarm. Run this command on each machine you want to add as a worker:
docker swarm join --token <TOKEN> <MANAGER_IP>:<PORT>
Replace <TOKEN> with the token you received and <MANAGER_IP>:<PORT> with the manager's IP address and port.
Example:
root@slave-node:~# docker swarm join --token SWMTKN-1-56w57906xnlk9u5zqt9oxeknzk2ykivurxifa35hbuto7ej3gd-d8ogz1b14esj66zr4jtpfouop 192.168.20.150:2377
This node joined a swarm as a worker.
3.5 Display Nodes
To get a list of the nodes in the Docker Swarm, use the command below:
docker node ls
Example:
root@master-node:~# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
6m77q8h2cmpibro3hr13wj7rx * master-node Ready Active Leader 20.10.25
vgzuyyqupr93qmbkijv7k7run slave-node Ready Active 20.10.25
Note that this command only works on a Swarm manager node (master-node in this example); on a worker node it fails:
root@slave-node:~# docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
4. Create Services
In Docker Swarm, you manage services instead of individual containers. A service defines how a container should run across the cluster. Create a service using the following command:
docker service create --name <SERVICE_NAME> --replicas <REPLICA_COUNT> -p <HOST_PORT>:<CONTAINER_PORT> <IMAGE_NAME>
Example:
root@master-node:~# docker service create --name test_container --replicas 4 -p 80:8080 nginxdemos/hello
edjj1z571kyptzk7w1mp5w482
overall progress: 4 out of 4 tasks
1/4: running
2/4: running
3/4: running
4/4: running
verify: Service converged
root@master-node:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3830bc4895ef nginxdemos/hello:latest "/docker-entrypoint.…" 18 seconds ago Up 17 seconds 80/tcp test_container.2.t30aazf4kt6rzicr67o3dfp6t
5262e0075243 nginxdemos/hello:latest "/docker-entrypoint.…" 18 seconds ago Up 17 seconds 80/tcp test_container.1.qa026o4setzcmkal4q6cow75x
root@slave-node:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01e57804db10 nginxdemos/hello:latest "/docker-entrypoint.…" 9 seconds ago Up 8 seconds 80/tcp test_container.3.hv8kofrygy3v6sb1yjaslqdbn
f7cd51164094 nginxdemos/hello:latest "/docker-entrypoint.…" 9 seconds ago Up 7 seconds 80/tcp test_container.4.38dymmxgvwbqt8q6pkgxp1z82
5. Scale Services
To change the number of replicas for a running service, use the following command:
docker service scale <SERVICE_NAME>=<NEW_REPLICA_COUNT>
Example:
root@master-node:~# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
edjj1z571kyp test_container replicated 4/4 nginxdemos/hello:latest *:80->8080/tcp
root@master-node:~# docker service scale edjj1z571kyp=6
edjj1z571kyp scaled to 6
overall progress: 6 out of 6 tasks
1/6: running
2/6: running
3/6: running
4/6: running
5/6: running
6/6: running
verify: Service converged
root@master-node:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
05cd7ba9781c nginxdemos/hello:latest "/docker-entrypoint.…" 17 seconds ago Up 11 seconds 80/tcp test_container.5.4ospqwofu16ntrofvcniom862
3830bc4895ef nginxdemos/hello:latest "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp test_container.2.t30aazf4kt6rzicr67o3dfp6t
5262e0075243 nginxdemos/hello:latest "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp test_container.1.qa026o4setzcmkal4q6cow75x
root@slave-node:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de068eff16a9 nginxdemos/hello:latest "/docker-entrypoint.…" 29 seconds ago Up 27 seconds 80/tcp test_container.6.yppy787v1zjxrkvljlykfrolr
01e57804db10 nginxdemos/hello:latest "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp test_container.3.hv8kofrygy3v6sb1yjaslqdbn
f7cd51164094 nginxdemos/hello:latest "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp test_container.4.38dymmxgvwbqt8q6pkgxp1z82
6. Inspect Services
To see the status of your services, use the following command:
docker service ps <SERVICE_NAME>
Example:
root@master-node:~# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
edjj1z571kyp test_container replicated 6/6 nginxdemos/hello:latest *:80->8080/tcp
root@master-node:~# docker service ps edjj1z571kyp
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
qa026o4setzc test_container.1 nginxdemos/hello:latest master-node Running Running 2 minutes ago
t30aazf4kt6r test_container.2 nginxdemos/hello:latest master-node Running Running 2 minutes ago
hv8kofrygy3v test_container.3 nginxdemos/hello:latest slave-node Running Running 2 minutes ago
38dymmxgvwbq test_container.4 nginxdemos/hello:latest slave-node Running Running 2 minutes ago
4ospqwofu16n test_container.5 nginxdemos/hello:latest master-node Running Running 58 seconds ago
yppy787v1zjx test_container.6 nginxdemos/hello:latest slave-node Running Running about a minute ago
7. Update Services
If you need to update a service (e.g., change the image version), use the following command:
docker service update --image <NEW_IMAGE> <SERVICE_NAME>
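For example, to move the test_container service from the walkthrough above to a different image tag (the plain-text tag of nginxdemos/hello is used here purely as an illustration; substitute whichever image you actually want to roll out), you could run:
docker service update --image nginxdemos/hello:plain-text test_container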
8. Remove Services
To remove a service, use:
docker service rm <SERVICE_NAME>
Example:
root@master-node:~# docker service rm edjj1z571kyp
edjj1z571kyp
root@master-node:~# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
root@master-node:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
05cd7ba9781c nginxdemos/hello:latest "/docker-entrypoint.…" About a minute ago Exited (0) 3 seconds ago test_container.5.4ospqwofu16ntrofvcniom862
3830bc4895ef nginxdemos/hello:latest "/docker-entrypoint.…" 2 minutes ago Exited (0) 3 seconds ago test_container.2.t30aazf4kt6rzicr67o3dfp6t
9. Leave Swarm
If you want to remove a node from the Swarm, run:
docker swarm leave
On a worker node, docker swarm leave is enough; on a manager node you must add the --force flag (docker swarm leave --force) to force the node to leave.
Example:
root@slave-node:~# docker swarm leave
Node left the swarm.
Remember that Docker Swarm provides built-in load balancing, service discovery, and fault tolerance. However, for larger and more complex environments, you might consider other orchestration platforms like Kubernetes.
How to build a service for Docker Swarm?
In Docker Swarm, you create and manage services to run your containerized applications. Here’s how you can build and deploy a service in Docker Swarm:
1. Create a Docker Image
Before you can create a service, you need to have a Docker image that defines your application. You can create a Docker image by writing a Dockerfile that describes how your application should be packaged and executed. Here’s a simplified example of a Dockerfile for a basic web application:
# Use an official Python runtime as a base image
FROM python:3.8
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container
COPY . /app
# Install any necessary dependencies
RUN pip install -r requirements.txt
# Expose port 80 to the outside world
EXPOSE 80
# Define the command to run your application
CMD ["python", "app.py"]
2. Build the Docker Image
Navigate to the directory containing your Dockerfile and application code, then run the following command to build the Docker image:
docker build -t my-app-image .
Replace my-app-image with a suitable name for your Docker image.
3. Push the Image to a Registry (Optional)
If you want to deploy your service on multiple machines or share it with others, push the Docker image to a container registry (e.g., Docker Hub). This step is optional for a single-node Swarm, but in a multi-node Swarm every node needs to be able to pull the image, so a registry is the usual approach for larger deployments.
docker login
docker tag my-app-image:latest username/my-app-image:latest
docker push username/my-app-image:latest
Replace username with your Docker Hub username and adjust the image name accordingly.
4. Create the Service
Once you have your Docker image ready, you can create a service using the docker service create command. Specify the image name, desired replicas, ports, and any other necessary options:
docker service create --name my-app-service --replicas 3 -p 8080:80 my-app-image
This command creates a service named my-app-service with 3 replicas, mapping port 8080 on the host to port 80 in the container, and using the my-app-image Docker image.
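If you pushed the image to a registry in the previous step, reference the fully qualified name instead, and add --with-registry-auth so that the other nodes can authenticate when pulling from a private registry (username is a placeholder for your registry account):
docker service create --name my-app-service --replicas 3 -p 8080:80 --with-registry-auth username/my-app-image:latest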
5. Inspect the Service
You can inspect the status of your service using the docker service ps command:
docker service ps my-app-service
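For a more detailed view of the service definition (image, published ports, update policy), docker service inspect with the --pretty flag is also useful:
docker service inspect --pretty my-app-service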
6. Scale the Service
If you want to scale the service up or down, use the docker service scale command:
docker service scale my-app-service=5
7. Update the Service
If you make changes to your application code, Dockerfile, or configuration, you can update the service using the docker service update command:
docker service update --image new-image:tag my-app-service
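Swarm keeps the previous service specification, so if the new version misbehaves you can revert the service to its earlier definition:
docker service rollback my-app-service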
8. Remove the Service
To remove the service when you’re done, use the docker service rm command:
docker service rm my-app-service
Remember that Docker Swarm takes care of load balancing, high availability, and service discovery for your application, making it easier to manage containerized applications across a cluster of machines.
How to remove a Manager Node from Docker Swarm and delete it?
Removing a manager node from a Docker Swarm involves a few steps to ensure a smooth transition and to prevent disruption to the cluster. Here’s how you can remove a manager node from a Docker Swarm and delete it:
1. Demote the Manager Node (Optional but Recommended)
It’s a good practice to demote the manager node you want to remove from its manager role before actually removing it from the Swarm. This ensures that the manager responsibilities are distributed across the remaining manager nodes.
On one of the other manager nodes, run the following command to demote the manager node you want to remove:
docker node demote <NODE_NAME>
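For example, if docker node ls on a surviving manager lists the node you want to remove as manager-2 (a hypothetical name used only for illustration), you would run:
docker node demote manager-2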
2. Leave the Swarm
On the manager node you want to remove, run the following command to leave the Swarm:
docker swarm leave
If the node is still a manager, docker swarm leave will refuse to run unless you add the --force flag; once the node leaves, the remaining managers elect a new leader so the Swarm stays functional.
3. Remove the Node from the Swarm
On one of the remaining manager nodes, run the following command to remove the node from the Swarm:
docker node rm <NODE_NAME>
Replace <NODE_NAME> with the name of the node you want to remove.
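Continuing the hypothetical example above, once manager-2 has left the swarm you can delete its entry; if the node is unreachable and never left cleanly, docker node rm also accepts a --force flag:
docker node rm manager-2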
4. Delete the Node (Optional)
If you want to completely delete the node from your infrastructure, you’ll need to perform OS-level cleanup and management. The steps can vary depending on your infrastructure and cloud provider:
- On a Cloud Provider: If you're using a cloud provider like AWS, Azure, or GCP, you'll need to navigate to the respective console and delete the associated virtual machine instance.
- On a Local Environment: If you're running the Swarm on your local machines, you'll need to log in to the node and perform the necessary cleanup, including stopping Docker services, removing containers, and deleting the machine itself.
Please note that removing a manager node from a Docker Swarm should be done with caution to ensure the stability and availability of your applications. It’s recommended to have a backup of any critical data and to ensure that the Swarm remains operational during the removal process.