Docker-Swarm
Since Docker 1.12, Swarm mode has been built into the Docker Engine itself as the docker swarm subcommand. Be careful to distinguish it from the older, standalone Docker Swarm.
Swarm mode has a built-in KV store and brings many new features, such as a fault-tolerant decentralized design, built-in service discovery, load balancing, a routing mesh, dynamic scaling, rolling updates, and secure transport. These make Docker's native Swarm clusters able to compete with Mesos and Kubernetes.
Basic Concepts
Swarm is the cluster management and orchestration tool built into (native to) the Docker Engine, built on SwarmKit.
Before using a Swarm cluster you should understand the following concepts.
Nodes
A host running Docker can initialize a new Swarm cluster or join an existing one; that host then becomes a node of the Swarm cluster.
Nodes are divided into manager nodes and worker nodes.
Manager nodes manage the Swarm cluster. The docker swarm commands can essentially only be run on a manager node (the exception is docker swarm leave, which a worker node can run to leave the cluster). A Swarm cluster can have several manager nodes, but only one of them can be the leader; the leader is elected via the Raft protocol.
Worker nodes are where tasks run: manager nodes dispatch services to worker nodes for execution. By default a manager node also acts as a worker node, but you can also configure a service to run only on manager nodes.
Services and tasks
A task (Task) is the smallest scheduling unit in Swarm; at present a task is a single container.
A service (Service) is a collection of tasks; the service defines the properties of those tasks. A service has two modes:
- replicated services run a specified number of tasks, spread across the worker nodes according to the scheduler's rules.
- global services run exactly one task on every worker node.
The mode is chosen with the --mode option of docker service create, as sketched below.
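For example, the two modes can be sketched as follows (the service names and the nginx image here are placeholders, not part of the cluster built below):
$ docker service create --name web --mode replicated --replicas 3 nginx
$ docker service create --name agent --mode global nginx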
A diagram in the Docker documentation illustrates the relationship between containers, tasks, and services.
Creating a Swarm Cluster
Initialize the cluster
The node on which docker swarm init is executed automatically becomes a manager node.
On a host with Docker already installed, run the following command:
$ docker swarm init --advertise-addr 10.45.25.10
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    10.45.25.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
If your Docker host has more than one network interface, and therefore more than one IP, you must use --advertise-addr to specify which IP to use.
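If the join command printed above is lost, it can be regenerated at any time on a manager node:
$ docker swarm join-token worker
$ docker swarm join-token manager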
Add worker nodes
The previous step initialized a Swarm cluster with one manager node. Next, run the following command on each of the two other Docker hosts to create worker nodes and join them to the cluster.
$ docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
10.45.25.10:2377
This node joined a swarm as a worker.
View the cluster
After the two steps above we have a minimal Swarm cluster with one manager node and two worker nodes.
Use docker node ls on the manager node to view the cluster.
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
03g1y59jwfg7cf99w4lt0f662     worker2    Ready    Active
9j68exjopxe7wfl6yuxml7a7j     worker1    Ready    Active
dxn1zf6l61qsb1josjja83ngz *   manager    Ready    Active         Leader
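Node roles are not fixed: from a manager you can promote a worker to a manager, or demote it again, for example:
$ docker node promote worker1
$ docker node demote worker1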
Deploying Services
Deploy a local registry server
# Create the bind-mount directories:
[root@manage ~]# mkdir /registry
[root@worker1 ~]# mkdir /registry
[root@worker2 ~]# mkdir /registry
# Add labels to the nodes:
[root@manage ~]# docker node update --label-add registry_flag=registry_node worker1
worker1
[root@manage ~]# docker node update --label-add container_flag=worker_node worker1
worker1
[root@manage ~]# docker node update --label-add container_flag=worker_node worker2
worker2
# Deploy the registry service: the label constraint pins the registry container to worker1
[root@manage ~]# docker service create --replicas 1 --constraint 'node.labels.registry_flag == registry_node' -p 5000:5000 --mount type=bind,source=/registry,destination=/var/lib/registry --name registry registry
uxb3ebhuhyiaes12f9iuvwpv2
overall progress: 1 out of 1 tasks
1/1: running
verify: Service uxb3ebhuhyiaes12f9iuvwpv2 converged
[root@manage ~]#
# Check the service status:
[root@manage ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE             PORTS
uxb3ebhuhyia   registry   replicated   1/1        registry:latest   *:5000->5000/tcp
[root@manage ~]# docker service ps registry
ID             NAME         IMAGE             NODE      DESIRED STATE   CURRENT STATE           ERROR   PORTS
imzv00gkm7id   registry.1   registry:latest   worker1   Running         Running 2 minutes ago
[root@manage ~]#
# Verify on worker1 and worker2:
[root@worker1 ~]# docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS      NAMES
6eceda7101d8   registry:latest   "/entrypoint.sh /etc…"   37 seconds ago   Up 36 seconds   5000/tcp   registry.1.imzv00gkm7ideibx4kn6qj6p7
[root@worker1 ~]#
[root@worker2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@worker2 ~]#
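Note: this registry is served over plain HTTP, so each node's Docker daemon normally has to trust it before push and pull will work. A minimal sketch for a systemd-based host (the file may need to be created first, and the same change applies on worker1 and worker2):
[root@manage ~]# cat /etc/docker/daemon.json
{
  "insecure-registries": ["10.45.25.10:5000"]
}
[root@manage ~]# systemctl restart docker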
# Push images to the registry:
[root@manage ~]# docker tag nginx:latest 10.45.25.10:5000/nginx:latest
[root@manage ~]# docker tag mysql:latest 10.45.25.10:5000/mysql:latest
[root@manage ~]# docker tag php:8.3-fpm-alpine3.19 10.45.25.10:5000/php:8.3-fpm-alpine3.19
[root@manage ~]# docker push 10.45.25.10:5000/nginx:latest
[root@manage ~]# docker push 10.45.25.10:5000/mysql:latest
[root@manage ~]# docker push 10.45.25.10:5000/php:8.3-fpm-alpine3.19
Note: if the registry is configured with a username and password, you must log in with docker login before pushing or pulling.
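To confirm the images arrived, the registry's v2 HTTP API can be queried (assuming curl is installed; output abbreviated):
[root@manage ~]# curl http://10.45.25.10:5000/v2/_catalog
{"repositories":["mysql","nginx","php"]}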
Deploying WordPress
Deploy WordPress with a docker-compose.yml file.
# Check the node roles:
[root@manage ~]# docker node inspect worker1 | grep -i role
"Role": "worker",
[root@manage ~]# docker node inspect worker2 | grep -i role
"Role": "worker",
[root@manage ~]# docker node inspect manage | grep -i role
"Role": "manager",
[root@manage ~]#
Note: role and labels are parallel node attributes. Use node.role to constrain placement by role, and node.labels.<key> to constrain by a label's key/value; see the sketch below.
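For illustration only (the service names and the nginx image below are placeholders), the two kinds of constraints look like this:
[root@manage ~]# docker service create --replicas 1 --constraint 'node.role == worker' --name demo-role nginx
[root@manage ~]# docker service create --replicas 1 --constraint 'node.labels.container_flag == worker_node' --name demo-label nginx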
# Push the wordpress image to the local registry:
[root@worker1 ~]# docker tag wordpress:latest 10.45.25.10:5000/wordpress:latest
[root@worker1 ~]# docker push 10.45.25.10:5000/wordpress:latest
# Write the docker-compose file:
[root@manage ~]# mkdir /worldpress-swarm
[root@manage ~]# cd /worldpress-swarm
[root@manage worldpress-swarm]# vim docker-compose.yml
[root@manage worldpress-swarm]# cat docker-compose.yml
services:
  wordpress:
    image: 10.45.25.10:5000/wordpress:latest
    ports:
      - 80:80
    networks:
      - overlay
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    deploy:
      mode: replicated
      replicas: 3
  db:
    image: 10.45.25.10:5000/mysql:latest
    networks:
      - overlay
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root2024
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    deploy:
      placement:
        constraints: [node.role == manager]
volumes:
  db-data:
networks:
  overlay:
[root@manage worldpress-swarm]#
# Deploy with docker stack deploy:
[root@manage worldpress-swarm]# docker stack deploy -c docker-compose.yml wordpress
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
Creating network wordpress_overlay
Creating service wordpress_wordpress
Creating service wordpress_db
[root@manage worldpress-swarm]#
# Check the stack status:
[root@manage worldpress-swarm]# docker stack ls
NAME        SERVICES
wordpress   2
[root@manage worldpress-swarm]# docker stack ps wordpress
ID             NAME                    IMAGE                               NODE      DESIRED STATE   CURRENT STATE                ERROR   PORTS
wka2kqn1b844   wordpress_db.1          10.45.25.10:5000/mysql:latest       manage    Running         Running 2 minutes ago
annd6e6qilni   wordpress_wordpress.1   10.45.25.10:5000/wordpress:latest   worker2   Running         Running about a minute ago
evnt40d7l7cn   wordpress_wordpress.2   10.45.25.10:5000/wordpress:latest   worker1   Running         Running 2 minutes ago
xz1d4rdib3ef   wordpress_wordpress.3   10.45.25.10:5000/wordpress:latest   manage    Running         Running 2 minutes ago
[root@manage worldpress-swarm]#
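Thanks to the routing mesh, the published port 80 is reachable on every node, not only the nodes that run a wordpress task. A quick check from the manager (assuming curl is installed):
[root@manage worldpress-swarm]# curl -I http://10.45.25.10/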
Managing Sensitive Data
Docker now provides secrets management. You can securely manage passwords, keys, certificates, and other sensitive data in a Swarm cluster, and share access to a given secret across multiple Docker container instances.
Note: secrets can also be used with Docker Compose.
Sensitive data is managed with the docker secret command. The examples below use the Swarm cluster created in the sections above.
[root@manage ~]# cat root_password.txt
root2024
[root@manage ~]# cat user_password.txt
user2024
# Create the secrets
[root@manage ~]# docker secret create mysql_root_password root_password.txt
u9t7c1qd2fbwcwa0jmzqbwkjh
[root@manage ~]# docker secret create mysql_user_password user_password.txt
deqx9r86dpl6b1v035dlpqt7g
# List the secrets
[root@manage ~]# docker secret ls
ID                          NAME                  DRIVER   CREATED              UPDATED
u9t7c1qd2fbwcwa0jmzqbwkjh   mysql_root_password            2 minutes ago        2 minutes ago
deqx9r86dpl6b1v035dlpqt7g   mysql_user_password            About a minute ago   About a minute ago
[root@manage ~]#
Create the MySQL service
If you do not explicitly specify a path in target, the secret is mounted by default via a tmpfs filesystem into the container's /run/secrets directory.
[root@manage ~]# docker network create -d overlay mysql_private
[root@manage ~]# docker service create \
--name mysql \
--replicas 1 \
--network mysql_private \
--mount type=volume,source=mydata,destination=/var/lib/mysql \
--secret source=mysql_root_password,target=mysql_root_password \
--secret source=mysql_user_password,target=mysql_user_password \
-e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
-e MYSQL_PASSWORD_FILE="/run/secrets/mysql_user_password" \
-e MYSQL_USER="wordpress" \
-e MYSQL_DATABASE="wordpress" \
10.45.25.11:5000/mysql:latest
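Once the task is running, the mount can be verified by reading the secret inside the container, on whichever node the task was scheduled (the docker ps filter below assumes the default task naming):
[root@manage ~]# docker exec $(docker ps -q -f name=mysql) cat /run/secrets/mysql_root_password
root2024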
Create the WordPress service
docker service create \
--name wordpress \
--replicas 1 \
--network mysql_private \
--publish 30000:80 \
--mount type=volume,source=wpdata,destination=/var/www/html \
--secret source=mysql_user_password,target=wp_db_password,mode=0444 \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD_FILE="/run/secrets/wp_db_password" \
-e WORDPRESS_DB_HOST="mysql:3306" \
-e WORDPRESS_DB_NAME="wordpress" \
10.45.25.11:5000/wordpress:latest
View the services
[root@manage ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE                               PORTS
t2rcn4qeubmv   mysql       replicated   1/1        10.45.25.11:5000/mysql:latest
qnpn3jw53bfk   registry    replicated   1/1        registry:latest                     *:5000->5000/tcp
zv5bndi9bm2g   wordpress   replicated   1/1        10.45.25.11:5000/wordpress:latest   *:30000->80/tcp
[root@manage ~]#
Open http://<any-node-ip>:30000 in a browser to access WordPress.
Managing Configuration Data
If you mount a configuration file with -v, you have to distribute it to every node by hand; if you create it as a config, Swarm distributes it automatically to the nodes that need it.
In a dynamic, large-scale distributed cluster, managing and distributing configuration files is also an important job. The traditional ways of distributing configuration (baking the file into the image, setting environment variables, mounting volumes dynamically, and so on) all reduce the generality of the image.
Create a config
[root@manage ~]# cat default.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
[root@manage ~]# docker config create nginx-default.conf default.conf
5nuibyhkchuhwq50rd4gz1pje
[root@manage ~]#
List the configs
[root@manage ~]# docker config ls
ID                          NAME                 CREATED          UPDATED
5nuibyhkchuhwq50rd4gz1pje   nginx-default.conf   40 seconds ago   40 seconds ago
[root@manage ~]#
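Unlike secrets, configs are not encrypted at rest, and their payload can be viewed from a manager:
[root@manage ~]# docker config inspect --pretty nginx-default.conf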
Create the nginx service
[root@manage ~]# docker pull nginx:1.26-alpine
[root@manage ~]# docker tag nginx:1.26-alpine 10.45.25.11:5000/nginx:1.26-alpine
[root@manage ~]# docker push 10.45.25.11:5000/nginx:1.26-alpine
[root@manage ~]# docker service create --name nginx-old \
--config source=nginx-default.conf,target=/etc/nginx/conf.d/default.conf \
--replicas 3 \
-p 8000:80 \
10.45.25.11:5000/nginx:1.26-alpine
Check the service status
[root@manage ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE                                PORTS
8n9sb43gi4a3   mysql       replicated   1/1        10.45.25.11:5000/mysql:latest
bm77c4l4m12c   nginx-old   replicated   3/3        10.45.25.11:5000/nginx:1.26-alpine   *:8000->80/tcp
qnpn3jw53bfk   registry    replicated   1/1        registry:latest                      *:5000->5000/tcp
mxm04712o74y   wordpress   replicated   1/1        10.45.25.11:5000/wordpress:latest    *:30000->80/tcp
[root@manage ~]#
[root@manage ~]# docker service ps nginx-old
ID             NAME          IMAGE                                NODE     DESIRED STATE   CURRENT STATE                ERROR   PORTS
sxzo0vwzk5ya   nginx-old.1   10.45.25.11:5000/nginx:1.26-alpine   work2    Running         Running about a minute ago
qc0sdo7atdt6   nginx-old.2   10.45.25.11:5000/nginx:1.26-alpine   manage   Running         Running about a minute ago
yamiwc0tkc4u   nginx-old.3   10.45.25.11:5000/nginx:1.26-alpine   work1    Running         Running about a minute ago
[root@manage ~]#
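Configs are immutable, so changing one means creating a new config and swapping it on the service. A sketch (the new config name nginx-default-v2.conf is illustrative):
[root@manage ~]# docker config create nginx-default-v2.conf default.conf
[root@manage ~]# docker service update \
    --config-rm nginx-default.conf \
    --config-add source=nginx-default-v2.conf,target=/etc/nginx/conf.d/default.conf \
    nginx-old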
Rolling Updates
Upgrade a service
# We deployed the nginx-old service above. A newer nginx image is now available, so we update the existing service.
[root@manage ~]# docker service update --image 10.45.25.11:5000/nginx:latest nginx-old
nginx-old
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service nginx-old converged
[root@manage ~]#
[root@manage ~]# docker service ps nginx-old
ID             NAME              IMAGE                                NODE     DESIRED STATE   CURRENT STATE                  ERROR   PORTS
tnekq07n38gt   nginx-old.1       10.45.25.11:5000/nginx:latest        work2    Running         Running 58 seconds ago
sxzo0vwzk5ya    \_ nginx-old.1   10.45.25.11:5000/nginx:1.26-alpine   work2    Shutdown        Shutdown 58 seconds ago
mwhe4x1nrw9l   nginx-old.2       10.45.25.11:5000/nginx:latest        manage   Running         Running about a minute ago
qc0sdo7atdt6    \_ nginx-old.2   10.45.25.11:5000/nginx:1.26-alpine   manage   Shutdown        Shutdown about a minute ago
8f3laxla05em   nginx-old.3       10.45.25.11:5000/nginx:latest        work1    Running         Running 54 seconds ago
yamiwc0tkc4u    \_ nginx-old.3   10.45.25.11:5000/nginx:1.26-alpine   work1    Shutdown        Shutdown 54 seconds ago
[root@manage ~]#
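By default the update replaces one task at a time. The rollout behaviour can be tuned with the standard update flags, for example (values are illustrative):
[root@manage ~]# docker service update \
    --update-parallelism 1 \
    --update-delay 10s \
    --image 10.45.25.11:5000/nginx:latest \
    nginx-old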
Roll back a service
# If the new image turns out to be broken, roll back:
[root@manage ~]# docker service rollback nginx-old
nginx-old
rollback: manually requested rollback
overall progress: rolling back update: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service nginx-old converged
[root@manage ~]#
[root@manage ~]# docker service ps nginx-old
ID             NAME              IMAGE                                NODE     DESIRED STATE   CURRENT STATE         ERROR   PORTS
uf3udtypyhm1   nginx-old.1       10.45.25.11:5000/nginx:1.26-alpine   work2    Running         Running 2 hours ago
tnekq07n38gt    \_ nginx-old.1   10.45.25.11:5000/nginx:latest        work2    Shutdown        Shutdown 2 hours ago
sxzo0vwzk5ya    \_ nginx-old.1   10.45.25.11:5000/nginx:1.26-alpine   work2    Shutdown        Shutdown 2 hours ago
ahszptew0mq3   nginx-old.2       10.45.25.11:5000/nginx:1.26-alpine   manage   Running         Running 2 hours ago
mwhe4x1nrw9l    \_ nginx-old.2   10.45.25.11:5000/nginx:latest        manage   Shutdown        Shutdown 2 hours ago
qc0sdo7atdt6    \_ nginx-old.2   10.45.25.11:5000/nginx:1.26-alpine   manage   Shutdown        Shutdown 2 hours ago
w7up99wmujo4   nginx-old.3       10.45.25.11:5000/nginx:1.26-alpine   work1    Running         Running 2 hours ago
8f3laxla05em    \_ nginx-old.3   10.45.25.11:5000/nginx:latest        work1    Shutdown        Shutdown 2 hours ago
yamiwc0tkc4u    \_ nginx-old.3   10.45.25.11:5000/nginx:1.26-alpine   work1    Shutdown        Shutdown 2 hours ago
[root@manage ~]#
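Rollback can also be made automatic when an update fails, by setting the failure action on the service (a sketch; the image tag is only an example):
[root@manage ~]# docker service update \
    --update-failure-action rollback \
    --image 10.45.25.11:5000/nginx:latest \
    nginx-old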