Docker Swarm on Play with Docker

Play with Docker

If you are learning Docker or want to try something out quickly, Play with Docker is the best place to do it. Play with Docker is a lab environment provided by Docker itself. To start playing around, all you need to do is log in with your free Docker ID. Each session gives you four hours of playtime.

The best thing is that you can add up to 5 nodes and try out all the cool features Docker provides. In this article, we will learn how to create a swarm, how to add manager and worker nodes to it, and how to create a service and watch its tasks spread across the swarm.

Objective

Our objective in this Play with Docker session is to create a swarm of managers and workers and see how service tasks are distributed across the nodes.

As part of this objective, we are going to perform the following steps:

  1. Initialize the Swarm
  2. Add Manager Nodes to the Swarm
  3. Add Worker Nodes to the Swarm
  4. Promote a Worker Node to Manager
  5. Create a Service with Replicas
  6. Scale the Service to Increase the Replicas
  7. Update the Service to Further Increase the Replicas

Play with Docker includes templates that create a predefined swarm with a set of managers and workers, such as “3 Managers and 2 Workers”, “5 Managers and no workers”, or “1 Manager and 1 Worker”. In this example, however, we are going to initialize the swarm and add the manager and worker nodes ourselves using Docker commands.

Once the session has started, click on “+ ADD NEW INSTANCE” to create nodes. Let’s add 5 nodes, which is the maximum number of nodes we can create in Play with Docker.

Once all 5 nodes are available, we’ll initialize the swarm on node1 and join node2 as a second manager. We’ll join node3 as a worker and then promote it to manager. After that we’ll add node4 and node5 as worker nodes.

Then we’ll create a service with 3 replicas using the docker/getting-started image. Once this is done, we’ll see how Docker makes sure 3 replicas stay available even when a node that was running one of the tasks shuts down.

Lastly, we’ll see how we can scale the number of replicas. Now, let’s get started.

If you are copying the commands from this article to try things out, please make sure to change the IPs according to your session.

Initializing the Swarm

Let’s run our first command to initialize the swarm: docker swarm init.

[node1] (local) root@192.168.0.15 ~
$ docker swarm init --advertise-addr 192.168.0.15:2377 --listen-addr 192.168.0.15:2377
Swarm initialized: current node (lbnhzlhks1cahurf5x39iw6vk) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-cp7j1632prhcgm9b6wqy219x6 192.168.0.15:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

This command not only initializes the swarm but also makes the current node the swarm’s first manager and its leader.

Here, the --advertise-addr flag tells Docker which IP address to advertise to other members of the swarm for API access and overlay networking. If the machine has a single IP address, you technically don’t have to specify it. But if the system has multiple IP addresses, --advertise-addr must be specified so that the correct address is chosen for inter-manager communication and overlay networking.

With the --listen-addr flag we specify the IP address on which the node listens for inbound swarm manager traffic.

For both flags we use the current node’s IP address. When you run this command, make sure to change it to your own instance’s IP.

The native (unencrypted) engine port is 2375 and the TLS-secured engine port is 2376, so swarm mode uses a separate port, 2377, which is the default for cluster management traffic. Once you press Enter, the output confirms that the swarm has been initialized and that this node is now a manager.
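
As a side note, if your node has a single IP address, both flags can be left out entirely and Docker falls back to its defaults. A minimal sketch, assuming a single-IP machine:

# With one IP address, Docker auto-detects the advertise address
# and listens on 0.0.0.0:2377 (the default swarm management port).
$ docker swarm init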

The output also gives us the exact command to run on any worker node that we want to join to this swarm. Note that it includes a token, which authenticates the worker when it joins.

It also tells us to run docker swarm join-token manager to get the corresponding command for adding a manager. Let’s run that.

[node1] (local) root@192.168.0.15 ~
$ docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-2xeu2vqppn1oj6kk6p3kj9lxj 192.168.0.15:2377

Now we have another command, this one for adding a manager to the swarm. Note that the token in this command is different from the worker token.

You can also use docker swarm join-token worker to print the command that joins a node to the swarm as a worker:

[node1] (local) root@192.168.0.15 ~
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-cp7j1632prhcgm9b6wqy219x6 192.168.0.15:2377
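
Join tokens act as credentials, so if one ever leaks you can invalidate it without disturbing nodes that have already joined. A sketch using the --rotate option of the same command:

# Invalidate the current worker token and print a fresh join command.
$ docker swarm join-token --rotate worker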

Let’s see what docker info returns as of now:

[node1] (local) root@192.168.0.15 ~
$ docker info
Client:
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: lbnhzlhks1cahurf5x39iw6vk
  Is Manager: true
  ClusterID: yjfkmxwayrvazt9a2kcec7teu
  Managers: 1
  Nodes: 1
  Default Address Pool: 10.0.0.0/8  
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.0.15
  Manager Addresses:
   192.168.0.15:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.4.0-184-generic
 Operating System: Alpine Linux v3.12 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.4GiB
 Name: node1
 ID: TPPH:JE75:EBRD:INWN:E6C6:XSGZ:2FJV:EA4W:L7FB:AAFF:T5HQ:DXBD
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 38
  Goroutines: 155
  System Time: 2020-06-29T12:00:33.421784818Z
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.1
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

This output confirms that swarm mode is now active. Since manager nodes also act as worker nodes, it shows one manager and one node in total.
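
If you only care about the swarm-related fields rather than the full dump, docker info accepts a Go template. A small sketch (the field names below assume the engine’s standard info structure):

# Print just the swarm state plus the manager and node counts.
$ docker info --format 'Swarm: {{.Swarm.LocalNodeState}}, managers: {{.Swarm.Managers}}, nodes: {{.Swarm.Nodes}}'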

Adding Nodes as Managers to the Swarm

Let’s switch over to node2, which we’d like to make a manager, and paste the command from the docker swarm join-token manager output. Just make sure that --advertise-addr and --listen-addr are set to the current node’s IP address.

[node2] (local) root@192.168.0.14 ~
$ docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-2xeu2vqppn1oj6kk6p3kj9lxj 192.168.0.15:2377 --advertise-addr 192.168.0.14:2377 --listen-addr 192.168.0.14:2377
This node joined a swarm as a manager.

Let us switch over to node3 and make it a worker node.

[node3] (local) root@192.168.0.16 ~
$ docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-cp7j1632prhcgm9b6wqy219x6 192.168.0.15:2377 --advertise-addr 192.168.0.16:2377 --listen-addr 192.168.0.16:2377
This node joined a swarm as a worker.

Let’s switch over to node1 and see how our swarm looks now.

[node1] (local) root@192.168.0.15 ~
$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
lbnhzlhks1cahurf5x39iw6vk *   node1               Ready               Active              Leader              19.03.11
kpo5tz14zmzf9lzk2jeqajk65     node2               Ready               Active              Reachable           19.03.11
jkn253f6b4n1m0dto1j0xzlmo     node3               Ready               Active                                  19.03.11

You can see that all 3 nodes are “Active” and in “Ready” status. The Manager Status of node1 is Leader, meaning it is the lead manager, and the asterisk next to its ID signifies that we are currently running the command from node1. A Manager Status of Reachable indicates that the node is a manager and is reachable; for worker nodes, the Manager Status column is blank.
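
If you want to see only the managers, docker node ls can filter by role; a quick sketch:

# List manager nodes only; use role=worker for the workers instead.
$ docker node ls --filter role=manager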

Promoting a worker node as a manager

Now let’s promote node3 to manager using the docker node promote command followed by node3’s ID.

[node1] (local) root@192.168.0.15 ~
$ docker node promote jkn253f6b4n1m0dto1j0xzlmo
Node jkn253f6b4n1m0dto1j0xzlmo promoted to a manager in the swarm.

Let’s look at the nodes in the swarm again.

[node1] (local) root@192.168.0.15 ~
$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
lbnhzlhks1cahurf5x39iw6vk *   node1               Ready               Active              Leader              19.03.11
kpo5tz14zmzf9lzk2jeqajk65     node2               Ready               Active              Reachable           19.03.11
jkn253f6b4n1m0dto1j0xzlmo     node3               Ready               Active              Reachable           19.03.11

It shows that node3’s Manager Status is now Reachable, meaning node3 is a manager.
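
The inverse operation exists too. We keep node3 as a manager for the rest of this walkthrough, but as a sketch, demoting it back to a worker would look like this:

# Demote a manager back to a plain worker (not executed in this session).
$ docker node demote jkn253f6b4n1m0dto1j0xzlmo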

Adding Nodes as Workers to the Swarm

Let’s now add node4 and node5 as worker nodes.

Switch over to node4 and run the command we got from the docker swarm join-token worker output.

[node4] (local) root@192.168.0.18 ~
$ docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-cp7j1632prhcgm9b6wqy219x6 192.168.0.15:2377 --advertise-addr 192.168.0.18:2377 --listen-addr 192.168.0.18:2377
This node joined a swarm as a worker.

Switch over to node5 and join it as a worker as well.

[node5] (local) root@192.168.0.17 ~
$ docker swarm join --token SWMTKN-1-5fuc7d0f1ofzm7p162bmijovuryv2wb1az6wy3jb0a27ghn0if-cp7j1632prhcgm9b6wqy219x6 192.168.0.15:2377 --advertise-addr 192.168.0.17:2377 --listen-addr 192.168.0.17:2377
This node joined a swarm as a worker.

Let’s switch over to node1 and see how the nodes in our swarm look now.

[node1] (local) root@192.168.0.15 ~
$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
lbnhzlhks1cahurf5x39iw6vk *   node1               Ready               Active              Leader              19.03.11
kpo5tz14zmzf9lzk2jeqajk65     node2               Ready               Active              Reachable           19.03.11
jkn253f6b4n1m0dto1j0xzlmo     node3               Ready               Active              Reachable           19.03.11
wt2hydofwcgl532mn6q5smo4n     node4               Ready               Active                                  19.03.11
x7h8bto56lwrmokow1wpbfyl9     node5               Ready               Active                                  19.03.11

We now have 5 nodes in our swarm, with node1, node2 and node3 as managers and node4 and node5 as workers.

Let’s run docker info and see how it shows up:

[node1] (local) root@192.168.0.15 ~
$ docker info
Client:
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: lbnhzlhks1cahurf5x39iw6vk
  Is Manager: true
  ClusterID: yjfkmxwayrvazt9a2kcec7teu
  Managers: 3
  Nodes: 5
  Default Address Pool: 10.0.0.0/8  
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.0.15
  Manager Addresses:
   192.168.0.14:2377
   192.168.0.15:2377
   192.168.0.16:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.4.0-184-generic
 Operating System: Alpine Linux v3.12 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.4GiB
 Name: node1
 ID: TPPH:JE75:EBRD:INWN:E6C6:XSGZ:2FJV:EA4W:L7FB:AAFF:T5HQ:DXBD
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 46
  Goroutines: 211
  System Time: 2020-06-29T12:18:05.081382343Z
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.1
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

This docker info output shows that we now have 5 nodes, 3 of which are managers. It also lists the IP address and port of each of the 3 manager nodes.

Creating the Service

Services are one of the major constructs introduced in Docker 1.12.

Services bring in the concept of desired state versus actual state: a reconciliation process runs in the background to make sure the actual state always matches the desired state.

Let’s create a service.

[node1] (local) root@192.168.0.15 ~
$ docker service create --name dockergs -p 80:80 --replicas 3 docker/getting-started:pwd
9qpjyalgi8q873sl37bb033j8
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged

  • Using create, we are telling Docker to create a service
  • Using --name, we are naming the service dockergs
  • Using -p 80:80, we are publishing port 80 on every swarm node and mapping it to port 80 inside the container
  • Using --replicas, we are asking for 3 replicas in total across all of the nodes

Any traffic hitting any node in the swarm on that published port will reach our service.

Let’s list the services using docker service ls:

[node1] (local) root@192.168.0.15 ~
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
9qpjyalgi8q8        dockergs            replicated          3/3                 docker/getting-started:pwd   *:80->80/tcp

Let’s check how the service tasks are spread across the nodes using the docker service ps command:

[node1] (local) root@192.168.0.15 ~
$ docker service ps dockergs
ID                  NAME                IMAGE                        NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
xheajn5nfwo0        dockergs.1          docker/getting-started:pwd   node4               Running             Running 2 minutes ago                       
ciietu78rfye        dockergs.2          docker/getting-started:pwd   node5               Running             Running 2 minutes ago                       
t6frv2zu4ig7        dockergs.3          docker/getting-started:pwd   node3               Running             Running 2 minutes ago

You can see that the service tasks are assigned to node3, node4 and node5.
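
To connect this back to the desired-state idea: docker service inspect shows what we asked for (3 replicas, port 80 published), while docker service ps shows what is actually running. A brief sketch:

# Human-readable summary of the service definition, including replica count and published ports.
$ docker service inspect --pretty dockergs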

If we click on the “80” link shown beside the “OPEN PORT” button, the application opens in a new tab.

Swarm mode’s routing mesh is intelligent enough to reroute a request even if it hits a node that has no task of the service. For example, node2 has not been assigned a task of this service, but if you click its port “80” link, the application page still opens.
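
You can verify the routing mesh from the command line as well. A sketch, assuming node2’s IP from this session (192.168.0.14); replace it with your own:

# node2 runs no dockergs task, yet the published port still answers there,
# because the routing mesh forwards the request to a node that does.
$ curl -sI http://192.168.0.14:80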

Scaling up the service using scale

Using the docker service scale command we can scale the number of replicas up or down across the nodes.

[node1] (local) root@192.168.0.15 ~
$ docker service scale dockergs=6
dockergs scaled to 6
overall progress: 6 out of 6 tasks 
1/6: running   [==================================================>] 
2/6: running   [==================================================>] 
3/6: running   [==================================================>] 
4/6: running   [==================================================>] 
5/6: running   [==================================================>] 
6/6: running   [==================================================>] 
verify: Service converged

The above command tells Docker to scale the service from the current 3 replicas up to 6. Six replicas spread across 5 nodes means we can expect at least one node to run 2 tasks. Let’s check that by running docker service ps again.

[node1] (local) root@192.168.0.15 ~
$ docker service ps dockergs
ID                  NAME                IMAGE                        NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
xheajn5nfwo0        dockergs.1          docker/getting-started:pwd   node4               Running             Running 6 minutes ago                        
ciietu78rfye        dockergs.2          docker/getting-started:pwd   node5               Running             Running 6 minutes ago                        
t6frv2zu4ig7        dockergs.3          docker/getting-started:pwd   node3               Running             Running 6 minutes ago                        
pmm6pkxr8yyk        dockergs.4          docker/getting-started:pwd   node2               Running             Running 41 seconds ago                       
uen7sldfy6yp        dockergs.5          docker/getting-started:pwd   node1               Running             Running 41 seconds ago                       
zgs9dwzpjyex        dockergs.6          docker/getting-started:pwd   node1               Running             Running 41 seconds ago

Here, we can see that node1 is serving 2 tasks.
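
To confirm that, docker service ps can be filtered by node; a small sketch:

# Show only the tasks of this service scheduled on node1.
$ docker service ps --filter node=node1 dockergs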

Let’s delete node5 and run docker service ls to see what happens to the replicas.

[node1] (local) root@192.168.0.15 ~
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
9qpjyalgi8q8        dockergs            replicated          6/6                 docker/getting-started:pwd   *:80->80/tcp

Even after deleting a node that was running a service task, 6 of 6 replicas are still running. Let’s see how the tasks are now allocated across the nodes by running the docker service ps command.

[node1] (local) root@192.168.0.15 ~
$ docker service ps dockergs
ID                  NAME                IMAGE                        NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
xheajn5nfwo0        dockergs.1          docker/getting-started:pwd   node4               Running             Running 11 minutes ago                           
bppgi82kq1of        dockergs.2          docker/getting-started:pwd   node2               Running             Running about a minute ago                       
ciietu78rfye         \_ dockergs.2      docker/getting-started:pwd   node5               Shutdown            Running 11 minutes ago                           
t6frv2zu4ig7        dockergs.3          docker/getting-started:pwd   node3               Running             Running 11 minutes ago                           
pmm6pkxr8yyk        dockergs.4          docker/getting-started:pwd   node2               Running             Running 6 minutes ago                            
uen7sldfy6yp        dockergs.5          docker/getting-started:pwd   node1               Running             Running 6 minutes ago                            
zgs9dwzpjyex        dockergs.6          docker/getting-started:pwd   node1               Running             Running 6 minutes ago

You can see that even after deleting node5, its task has been rescheduled onto node2, and Docker continues to maintain 6/6 replicas to match the desired replica count.
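
Deleting the instance simulated a sudden node failure. On a real cluster, the gentler approach is to drain a node first, so the scheduler moves its tasks before the node goes away. A sketch of that workflow:

# Stop scheduling tasks on node5 and move its existing tasks elsewhere.
$ docker node update --availability drain node5
# Later, allow the node to receive tasks again.
$ docker node update --availability active node5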

Scaling up the service using update

You can also scale up the service replicas using the docker service update command.

[node1] (local) root@192.168.0.15 ~
$ docker service update --replicas 12 dockergs
dockergs
overall progress: 12 out of 12 tasks 
1/12: running   [==================================================>] 
2/12: running   [==================================================>] 
3/12: running   [==================================================>] 
4/12: running   [==================================================>] 
5/12: running   [==================================================>] 
6/12: running   [==================================================>] 
7/12: running   [==================================================>] 
8/12: running   [==================================================>] 
9/12: running   [==================================================>] 
10/12: running   [==================================================>] 
11/12: running   [==================================================>] 
12/12: running   [==================================================>] 
verify: Service converged

We’ve scaled up to 12 replicas now. Let’s list the service details using docker service ls:

[node1] (local) root@192.168.0.15 ~
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                        PORTS
9qpjyalgi8q8        dockergs            replicated          12/12               docker/getting-started:pwd   *:80->80/tcp

Docker made all 12 replicas available. Let’s see how these tasks are assigned across all the nodes using docker service ps.

[node1] (local) root@192.168.0.15 ~
$ docker service ps dockergs
ID                  NAME                IMAGE                        NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
xheajn5nfwo0        dockergs.1          docker/getting-started:pwd   node4               Running             Running 22 minutes ago                           
bppgi82kq1of        dockergs.2          docker/getting-started:pwd   node2               Running             Running 11 minutes ago                           
ciietu78rfye         \_ dockergs.2      docker/getting-started:pwd   node5               Shutdown            Running 22 minutes ago                           
t6frv2zu4ig7        dockergs.3          docker/getting-started:pwd   node3               Running             Running 22 minutes ago                           
pmm6pkxr8yyk        dockergs.4          docker/getting-started:pwd   node2               Running             Running 16 minutes ago                           
uen7sldfy6yp        dockergs.5          docker/getting-started:pwd   node1               Running             Running 16 minutes ago                           
zgs9dwzpjyex        dockergs.6          docker/getting-started:pwd   node1               Running             Running 16 minutes ago                           
rhqq8dvre1zs        dockergs.7          docker/getting-started:pwd   node2               Running             Running about a minute ago                       
pifkveiabi1g        dockergs.8          docker/getting-started:pwd   node1               Running             Running about a minute ago                       
awsd4qa2v2p0        dockergs.9          docker/getting-started:pwd   node4               Running             Running about a minute ago                       
a0eu3w2k61gv        dockergs.10         docker/getting-started:pwd   node4               Running             Running about a minute ago                       
ah2iz8ay195o        dockergs.11         docker/getting-started:pwd   node3               Running             Running about a minute ago                       
ng61z5vrkusk        dockergs.12         docker/getting-started:pwd   node3               Running             Running about a minute ago

As this output shows, Docker spread the tasks evenly across the remaining nodes.
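
A quick way to count tasks per node is to format the docker service ps output and aggregate it with standard shell tools; a sketch:

# Count how many running tasks of the service each node holds.
$ docker service ps --filter desired-state=running --format '{{.Node}}' dockergs | sort | uniq -c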

Adding a new node to swarm after service task allocation

Now let’s add a new instance by clicking on “+ ADD NEW INSTANCE” and join it to the swarm as a worker.

Once that’s done, let’s list the service tasks again using docker service ps.

[node1] (local) root@192.168.0.15 ~
$ docker service ps dockergs
ID                  NAME                IMAGE                        NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
xheajn5nfwo0        dockergs.1          docker/getting-started:pwd   node4               Running             Running 24 minutes ago                       
bppgi82kq1of        dockergs.2          docker/getting-started:pwd   node2               Running             Running 13 minutes ago                       
ciietu78rfye         \_ dockergs.2      docker/getting-started:pwd   node5               Shutdown            Running 24 minutes ago                       
t6frv2zu4ig7        dockergs.3          docker/getting-started:pwd   node3               Running             Running 24 minutes ago                       
pmm6pkxr8yyk        dockergs.4          docker/getting-started:pwd   node2               Running             Running 18 minutes ago                       
uen7sldfy6yp        dockergs.5          docker/getting-started:pwd   node1               Running             Running 18 minutes ago                       
zgs9dwzpjyex        dockergs.6          docker/getting-started:pwd   node1               Running             Running 18 minutes ago                       
rhqq8dvre1zs        dockergs.7          docker/getting-started:pwd   node2               Running             Running 3 minutes ago                        
pifkveiabi1g        dockergs.8          docker/getting-started:pwd   node1               Running             Running 3 minutes ago                        
awsd4qa2v2p0        dockergs.9          docker/getting-started:pwd   node4               Running             Running 3 minutes ago                        
a0eu3w2k61gv        dockergs.10         docker/getting-started:pwd   node4               Running             Running 3 minutes ago                        
ah2iz8ay195o        dockergs.11         docker/getting-started:pwd   node3               Running             Running 3 minutes ago                        
ng61z5vrkusk        dockergs.12         docker/getting-started:pwd   node3               Running             Running 3 minutes ago

The newly added node does not appear in this list. Newly added nodes don’t automatically receive a share of the existing tasks.

Adding new nodes to the swarm or bringing the old ones back does not automatically re-balance existing running tasks.
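
If you do want the tasks spread onto the new node, you can force the service to redeploy its tasks, which lets the scheduler place some of them there. A sketch (note that this restarts the tasks, so use it deliberately):

# Force a redeployment of the service's tasks so the scheduler can use the new node.
$ docker service update --force dockergs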

References:

Docker Command-line Reference