Environment Preparation

Hostname (role)    IP
swarm-manager      172.16.100.20
swarm-node1        172.16.100.22
swarm-node2        172.16.100.22

The hostname of a node cannot be changed after it has joined a swarm mode cluster.
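
Because the hostname is locked in at join time, set it on each host before running docker swarm join. A minimal sketch, assuming systemd-based hosts where hostnamectl is available (adjust the name per host):

[root@swarm-node1 ~]# hostnamectl set-hostname swarm-node1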

Prerequisites

  1. Install Docker Engine 1.12 or later
  2. Open TCP port 2377 for cluster management traffic
  3. Open TCP/UDP port 7946 for node-to-node communication (container network discovery)
  4. Open UDP port 4789 for overlay network traffic (a firewall sketch follows this list)
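
How these ports are opened depends on the firewall in use; as a hedged sketch, on hosts running firewalld the rules could be added like this on every node (assuming the default zone):

[root@swarm-manager ~]# firewall-cmd --permanent --add-port=2377/tcp
[root@swarm-manager ~]# firewall-cmd --permanent --add-port=7946/tcp
[root@swarm-manager ~]# firewall-cmd --permanent --add-port=7946/udp
[root@swarm-manager ~]# firewall-cmd --permanent --add-port=4789/udp
[root@swarm-manager ~]# firewall-cmd --reload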

Create the swarm mode cluster

[root@swarm-manager ~]# docker swarm init --advertise-addr 172.16.100.20
Swarm initialized: current node (sc21k9597zasfjaf6cfpuyvy6) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-10rnutvx6cpja7wv88k7ydywpvjjz1xsj88on00s43te740xca-1pwd9juzgpwnxlxne7p2g93va 172.16.100.20:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@swarm-manager ~]# docker info |grep -A5 Swarm
Swarm: active
 NodeID: pg6fteetxsezu2ygyd3b0joye
 Is Manager: true
 ClusterID: yb5c85p7o054sxp1hb8ieqw43
 Managers: 1
 Nodes: 1
[root@swarm-manager ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
pg6fteetxsezu2ygyd3b0joye *   swarm-manager       Ready               Active              Leader
[root@swarm-manager ~]# netstat -lntp|grep docker
tcp6       0      0 :::2377                 :::*                    LISTEN      1249/dockerd
tcp6       0      0 :::7946                 :::*                    LISTEN      1249/dockerd

Add nodes to the swarm mode cluster

A swarm mode cluster consists of manager and worker nodes; the join command for either role can be obtained with docker swarm join-token [manager|worker].

[root@swarm-manager ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5vp5axn28a2cbrtzxlirktbhpnluayacuw81zqacooe3ooe2o3-6ys543fe4zkeagkoaacgaqe3e 172.16.100.20:2377

[root@swarm-manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5vp5axn28a2cbrtzxlirktbhpnluayacuw81zqacooe3ooe2o3-64gphy50682jszwc19nn0onpc 172.16.100.20:2377

Run the following docker swarm join command on node1 and node2 respectively to add them as worker nodes:

[root@swarm-node1 ~]# docker swarm join --token SWMTKN-1-5vp5axn28a2cbrtzxlirktbhpnluayacuw81zqacooe3ooe2o3-64gphy50682jszwc19nn0onpc 172.16.100.20:2377

Manage swarm mode cluster nodes

[root@swarm-manager ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
lati2179dcwgkvvkc0qcieoim     swarm-node2         Ready               Active
pg6fteetxsezu2ygyd3b0joye *   swarm-manager       Ready               Active              Leader
y83k6khc3vxmch1qd3j8kl4ak     swarm-node1         Ready               Active

Promote/demote nodes

  • Promote worker nodes to manager nodes
    [root@swarm-manager ~]# docker node promote swarm-node1 swarm-node2
    Node swarm-node1 promoted to a manager in the swarm.
    Node swarm-node2 promoted to a manager in the swarm.
    # docker node ls
    ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
    lati2179dcwgkvvkc0qcieoim     swarm-node2         Ready               Active              Reachable
    pg6fteetxsezu2ygyd3b0joye *   swarm-manager       Ready               Active              Leader
    y83k6khc3vxmch1qd3j8kl4ak     swarm-node1         Ready               Active              Reachable
  • Demote manager nodes to worker nodes
    [root@swarm-manager ~]# docker node demote swarm-node1 swarm-node2
    Manager swarm-node1 demoted in the swarm.
    Manager swarm-node2 demoted in the swarm.
    [root@swarm-manager ~]# docker node ls
    ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
    lati2179dcwgkvvkc0qcieoim     swarm-node2         Ready               Active
    pg6fteetxsezu2ygyd3b0joye *   swarm-manager       Ready               Active              Leader
    y83k6khc3vxmch1qd3j8kl4ak     swarm-node1         Ready               Active

Remove nodes

To remove a node, first run docker swarm leave on the worker node to set its status to Down, then run docker node rm <node-name> on a manager node. To remove a manager node, a forced removal with --force is not recommended; demote the node first and then remove it.

[root@swarm-manager ~]# docker swarm leave
Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. Removing the last manager erases all current state of the swarm. Use `--force` to ignore this message.
[root@swarm-manager ~]# docker node rm swarm-node1
Error response from daemon: rpc error: code = 9 desc = node y83k6khc3vxmch1qd3j8kl4ak is not down and can't be removed
[root@swarm-node2 ~]# docker swarm leave
Node left the swarm.
[root@swarm-manager ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
lati2179dcwgkvvkc0qcieoim     swarm-node2         Down                Active
pg6fteetxsezu2ygyd3b0joye *   swarm-manager       Ready               Active              Leader
y83k6khc3vxmch1qd3j8kl4ak     swarm-node1         Ready               Active
[root@swarm-manager ~]# docker node rm swarm-node2
swarm-node2
[root@swarm-manager ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
pg6fteetxsezu2ygyd3b0joye *   swarm-manager       Ready               Active              Leader
y83k6khc3vxmch1qd3j8kl4ak     swarm-node1         Ready               Active
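
For a manager node, the demote-then-remove sequence described above would look roughly like this (a hedged sketch, assuming swarm-node1 were still a manager):

[root@swarm-manager ~]# docker node demote swarm-node1
[root@swarm-node1 ~]# docker swarm leave
[root@swarm-manager ~]# docker node rm swarm-node1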