Deploying an ES cluster with Docker Swarm
0. Environment preparation
Adjust the system configuration: on every host, edit /etc/sysctl.conf and append the following line:
vm.max_map_count=262144
Save the file, then run sysctl -p to apply it.
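After applying the setting, it is worth confirming it actually took effect on each host, since Elasticsearch refuses to start when the value is below its bootstrap-check threshold. A minimal verification sketch (the 262144 threshold is the one required above):

```shell
# Verify the kernel setting on the current host (run on every swarm node).
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count OK ($current)"
else
  echo "vm.max_map_count too low ($current < $required); re-run sysctl -p" >&2
fi
```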
1. Preparing the docker-compose file
docker-compose-es-cluster.yml:
version: '3.3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    environment:
      - ELASTICSEARCH_URL=http://es_master:9200
      - ELASTICSEARCH_HOSTS=http://es_master:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=vsUZGKNvjWRtTKPmDG
    ports:
      - 5601:5601
    networks:
      - elastic
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          memory: 800M
      placement:
        constraints:
          - node.role==manager
  es_master:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=es_master
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es_node
      - cluster.initial_master_nodes=es_master,es_node
      - network.host=0
      - network.publish_host=_eth0_
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - path.repo=/usr/share/elasticsearch/backups
      - xpack.security.enabled=true
      - xpack.security.audit.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - ELASTIC_PASSWORD=vsUZGKNvjWRtTKPmDG
    volumes:
      - es_master_data:/usr/share/elasticsearch/data
      - es_master_logs:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits: # resource ceiling for es_master
          cpus: "0.50"
          memory: 1G
        reservations: # resources guaranteed to es_master at all times
          cpus: "0.25"
          memory: 1G
      placement:
        constraints:
          - node.role==manager # placement constraint
  es_node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.8
    environment:
      - node.name=es_node
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es_master
      - cluster.initial_master_nodes=es_node,es_master
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - path.repo=/usr/share/elasticsearch/backups
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=vsUZGKNvjWRtTKPmDG
    volumes:
      - es_node_data:/usr/share/elasticsearch/data
      - es_node_logs:/usr/share/elasticsearch/logs
    networks:
      - elastic
    deploy:
      mode: replicated
      replicas: 2
      resources:
        limits:
          cpus: "0.50"
          memory: 1G
        reservations:
          cpus: "0.25"
          memory: 1G
      placement:
        constraints:
          - node.role==manager
volumes:
  es_master_data:
    driver: local
  es_master_logs:
    driver: local
  es_node_data:
    driver: local
  es_node_logs:
    driver: local
networks:
  elastic:
    driver: overlay # must be a swarm-scoped (overlay) network
2. Deploying the services
Deploy the stack with docker stack deploy, where the -c option specifies the compose file:
$ docker stack deploy -c docker-compose-es-cluster.yml es
Verify the ES service: open any node's IP:9200 in a browser (since xpack.security.enabled=true, you will be prompted for Basic auth; log in as elastic with the ELASTIC_PASSWORD from the compose file) to see that node's status.
At this point ES is up, but it remains to verify that the cluster itself formed correctly.
Verify the cluster: open any node's IP:9200/_cluster/health to see the cluster status.
Verify Kibana: open any node's IP:5601; seeing the Kibana login page indicates success.
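The same checks can be scripted with curl instead of a browser. A minimal sketch: the helper below classifies the `status` field of the `_cluster/health` response (green and yellow are usable; red is not), and the commented-out curl line shows how to feed it from a live cluster — the 127.0.0.1 host is an assumption; substitute any node's IP.

```shell
# es_health_ok STATUS: succeed for "green" or "yellow", fail otherwise.
es_health_ok() {
  case "$1" in
    green|yellow) return 0 ;;
    *) return 1 ;;
  esac
}

# Against a running cluster (commented out here, since it needs live nodes):
# status=$(curl -s -u elastic:vsUZGKNvjWRtTKPmDG \
#   "http://127.0.0.1:9200/_cluster/health" \
#   | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
# es_health_ok "$status" && echo "cluster usable ($status)"

es_health_ok green && echo "cluster usable (green)"
```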
3. Checking the services
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ou62f7ff0ast es_es_master replicated 1/1 docker.elastic.co/elasticsearch/elasticsearch:7.17.8 *:9200->9200/tcp, *:9300->9300/tcp
6pyqh40a2jox es_es_node replicated 2/2 docker.elastic.co/elasticsearch/elasticsearch:7.17.8
kpc66kphr4mb es_kibana replicated 1/1 docker.elastic.co/kibana/kibana:7.6.2 *:5601->5601/tcp
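If a REPLICAS column does not reach its target (e.g. 0/1), inspect the individual tasks and their logs. The service names below assume the stack was deployed as es, as in the command above; these need a running swarm, so no output is shown.

```shell
# Show the task history for a service, including failure reasons.
docker service ps es_es_master --no-trunc

# Tail the most recent log lines from all replicas of a service.
docker service logs --tail 50 es_es_master
```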
4. Troubleshooting
1. Master node cannot be found
Error message (excerpt): master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes ...
Solution: this can usually be resolved by taking the stack down, deleting the associated volumes, and redeploying — but be very cautious with this in production, since removing the volumes deletes all index data. Once the services are running, configuration changes can be applied by re-running docker stack deploy -c docker-compose-es-cluster.yml es.
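The recovery steps above can be sketched as the following sequence. This is destructive (the volumes hold all index data, so it is for dev/test only), and the es_-prefixed volume names assume the stack was deployed as es — Swarm prefixes named volumes with the stack name.

```shell
# Dev/test only: wipe the stack and its data, then redeploy from scratch.
docker stack rm es

# Stack removal is asynchronous; give swarm time to tear down
# the containers and the overlay network before touching volumes.
sleep 20

# Named volumes are prefixed with the stack name ("es_").
docker volume rm es_es_master_data es_es_master_logs \
                 es_es_node_data es_es_node_logs

docker stack deploy -c docker-compose-es-cluster.yml es
```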