Make sure Docker and Docker Compose are already installed in your CentOS environment.
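A quick sanity check (the exact versions on your machine will differ) is to confirm both commands respond and that the Docker daemon is running:
docker --version
docker-compose --version
## optional: confirm the Docker daemon is active
sudo systemctl status docker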
Install cerebro, es, and kibana: write the docker-compose.yml file and deploy a single-node environment
version: '3.5'
services:
  cerebro:
    image: lmenezes/cerebro:latest
    container_name: cerebro
    ports:
      - "9000:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
  kibana:
    image: kibana:7.1.0
    container_name: kibana7
    environment:
      I18N_LOCALE: "zh-CN"
      XPACK_GRAPH_ENABLED: "true"
      TIMELION_ENABLED: "true"
      XPACK_MONITORING_COLLECTION_ENABLED: "true"
    ports:
      - "5601:5601"
    networks:
      - es7net
  elasticsearch:
    image: elasticsearch:7.1.0
    container_name: es7
    environment:
      cluster.name: geektime
      node.name: es7
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7
      cluster.initial_master_nodes: es7
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data/:/usr/share/elasticsearch/data/
    ports:
      - "9200:9200"
    networks:
      - es7net
networks:
  es7net:
    driver: bridge
Create the data directory, then bring the stack up; the images are pulled automatically on first start
mkdir -p ~/elastic/es7data
cd ~/elastic            ## run from the directory containing docker-compose.yml
docker-compose up -d
Verify that the containers started successfully
sudo docker images
sudo docker ps
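Beyond listing containers, each service can be probed on its published port once it finishes starting (a minimal sketch using the ports from the compose file above):
## Elasticsearch should answer with its cluster name (geektime) and version 7.1.0
curl http://localhost:9200
## Cerebro (port 9000) and Kibana (port 5601) should return an HTTP response
curl -I http://localhost:9000
curl -I http://localhost:5601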
Install cerebro, es, and kibana: write the docker-compose.yml file for a cluster (hot/warm/cold) deployment
version: '3.5'
services:
  cerebro:
    image: lmenezes/cerebro:latest
    container_name: cerebro
    ports:
      - "9600:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
  kibana:
    image: kibana:7.1.0
    container_name: kibana7
    environment:
      I18N_LOCALE: "zh-CN"
      XPACK_GRAPH_ENABLED: "true"
      TIMELION_ENABLED: "true"
      XPACK_MONITORING_COLLECTION_ENABLED: "true"
    ports:
      - "5601:5601"
    networks:
      - es7net
  elasticsearch:
    image: elasticsearch:7.1.0
    container_name: es7_hot
    environment:
      cluster.name: geektime
      node.name: es7_hot
      node.attr.box_type: hot
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7_hot,es7_warm,es7_cold
      cluster.initial_master_nodes: es7_hot,es7_warm,es7_cold
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data_hot/:/usr/share/elasticsearch/data/
    ports:
      - "9200:9200"
    networks:
      - es7net
  elasticsearch2:
    image: elasticsearch:7.1.0
    container_name: es7_warm
    environment:
      cluster.name: geektime
      node.name: es7_warm
      node.attr.box_type: warm
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7_hot,es7_warm,es7_cold
      cluster.initial_master_nodes: es7_hot,es7_warm,es7_cold
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data_warm/:/usr/share/elasticsearch/data/
    networks:
      - es7net
  elasticsearch3:
    image: elasticsearch:7.1.0
    container_name: es7_cold
    environment:
      cluster.name: geektime
      node.name: es7_cold
      node.attr.box_type: cold
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7_hot,es7_warm,es7_cold
      cluster.initial_master_nodes: es7_hot,es7_warm,es7_cold
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data_cold/:/usr/share/elasticsearch/data/
    networks:
      - es7net
volumes:
  es7data_hot:
    driver: local
  es7data_warm:
    driver: local
  es7data_cold:
    driver: local
networks:
  es7net:
    driver: bridge
Create the data directories and bring the cluster up
sudo mkdir -p es7data_hot es7data_warm es7data_cold
sudo chown -R 1000:1000 es7data_hot es7data_warm es7data_cold   ## the elasticsearch image runs as uid 1000; adjust if the containers cannot write to the data dirs
sudo docker-compose up -d
Verify that the cluster started successfully
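A quick check (a sketch, assuming the ports published above) is to confirm that all three nodes joined the cluster:
sudo docker ps
## es7_hot, es7_warm and es7_cold should all appear in the node list
curl http://localhost:9200/_cat/nodes?v
curl http://localhost:9200/_cluster/health?pretty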
Install cerebro, es, kibana, and logstash: write the docker-compose.yml file
version: '3.5'
services:
  cerebro:
    image: lmenezes/cerebro:latest
    container_name: cerebro
    ports:
      - "9600:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
  kibana:
    image: kibana:7.1.0
    container_name: kibana7
    environment:
      I18N_LOCALE: "zh-CN"
      XPACK_GRAPH_ENABLED: "true"
      TIMELION_ENABLED: "true"
      XPACK_MONITORING_COLLECTION_ENABLED: "true"
    ports:
      - "5601:5601"
    networks:
      - es7net
  elasticsearch:
    image: elasticsearch:7.1.0
    container_name: es7_hot
    environment:
      cluster.name: geektime
      node.name: es7_hot
      node.attr.box_type: hot
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7_hot,es7_warm,es7_cold
      cluster.initial_master_nodes: es7_hot,es7_warm,es7_cold
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data_hot/:/usr/share/elasticsearch/data/
    ports:
      - "9200:9200"
    networks:
      - es7net
  elasticsearch2:
    image: elasticsearch:7.1.0
    container_name: es7_warm
    environment:
      cluster.name: geektime
      node.name: es7_warm
      node.attr.box_type: warm
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7_hot,es7_warm,es7_cold
      cluster.initial_master_nodes: es7_hot,es7_warm,es7_cold
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data_warm/:/usr/share/elasticsearch/data/
    networks:
      - es7net
  elasticsearch3:
    image: elasticsearch:7.1.0
    container_name: es7_cold
    environment:
      cluster.name: geektime
      node.name: es7_cold
      node.attr.box_type: cold
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.seed_hosts: es7_hot,es7_warm,es7_cold
      cluster.initial_master_nodes: es7_hot,es7_warm,es7_cold
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es7data_cold/:/usr/share/elasticsearch/data/
    networks:
      - es7net
  logstash:
    image: logstash:7.1.0
    container_name: logstash
    volumes:
      - ./logstash/logstash.conf:/usr/share/logstash/config/logstash.conf
      - ./logstash/data/:/usr/share/logstash/data/
    depends_on:
      - elasticsearch
      - kibana
      - elasticsearch2
      - elasticsearch3
    command: bash -c "logstash -f /usr/share/logstash/config/logstash.conf"
    ports:
      - "4560:4560"
    networks:
      - es7net
volumes:
  es7data_hot:
    driver: local
  es7data_warm:
    driver: local
  es7data_cold:
    driver: local
  logstash:
    driver: local
networks:
  es7net:
    driver: bridge
Create the directories and files, then bring the stack up
- Create the directories and write the logstash pipeline config
sudo mkdir -p logstash/data
cd logstash
vi logstash.conf
input {
  file {
    path => "/usr/share/logstash/data/movies.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }
  mutate {
    split => { "genre" => "|" }
    remove_field => ["path", "host", "@timestamp", "message"]
  }
  mutate {
    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}" }
    add_field => { "year" => "%{[content][1]}" }
  }
  mutate {
    convert => {
      "year" => "integer"
    }
    strip => ["title"]
    remove_field => ["path", "host", "@timestamp", "message", "content"]
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "movies"
    document_id => "%{id}"
  }
  stdout {}
}
- Download the data that logstash will import into ES. The smallest dataset from the MovieLens site (https://grouplens.org/datasets/movielens/) is used here; upload movies.csv to logstash's data directory via FTP, or fetch it directly on the host as sketched below.
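One way to fetch it (a sketch; the archive name and layout are the ones published by MovieLens, and the compose project directory is assumed to be ~/elastic):
cd ~/elastic/logstash/data
wget https://files.grouplens.org/datasets/movielens/ml-latest-small.zip
unzip ml-latest-small.zip
cp ml-latest-small/movies.csv .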
Verify that everything started successfully
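Besides checking the containers, the import itself can be verified once logstash finishes (recent versions of ml-latest-small hold roughly 9,700 movies):
sudo docker logs --tail 20 logstash               ## check the import progress
curl http://localhost:9200/_cat/indices/movies?v
curl http://localhost:9200/movies/_count?pretty
## a csv line such as `1,Toy Story (1995),Adventure|Animation|Children|Comedy|Fantasy`
## should appear roughly as {"id":"1","title":"Toy Story","year":1995,"genre":["Adventure", ...]}
curl http://localhost:9200/movies/_doc/1?pretty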
Troubleshooting common errors
- max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
- Fix: increase this kernel setting on the host (or virtual machine)
sudo vi /etc/sysctl.conf
## append the following parameter
vm.max_map_count=655360
## reload the settings and confirm the new value took effect
sudo sysctl -p
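The value can also be applied immediately without editing the file (handy for a quick test, but it does not persist across reboots):
sudo sysctl -w vm.max_map_count=262144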
- memory locking requested for elasticsearch process but memory is not locked
- Fix: add the ulimits settings to the elasticsearch service in docker-compose.yml
elasticsearch:
  image: elasticsearch:7.1.0
  container_name: es7
  environment:
    cluster.name: geektime
    node.name: es7
    bootstrap.memory_lock: "true"
    ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    discovery.seed_hosts: es7
    cluster.initial_master_nodes: es7
  ################ add the following ulimits settings ################
  ulimits:
    memlock:
      soft: -1
      hard: -1
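Whether the memory lock actually took effect can be confirmed through the nodes info API; mlockall should report true for the node:
curl "http://localhost:9200/_nodes?filter_path=**.mlockall&pretty"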
- the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
- Fix: configure these environment variables in docker-compose.yml
elasticsearch:
  image: elasticsearch:7.1.0
  container_name: es7
  environment:
    cluster.name: geektime
    node.name: es7
    bootstrap.memory_lock: "true"
    ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    ######### add the following environment variables #########
    discovery.seed_hosts: es7
    cluster.initial_master_nodes: es7
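After restarting with these settings, a quick check (assuming port 9200 is published) confirms that a master node was elected:
curl http://localhost:9200/_cat/master?v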
From: https://www.cnblogs.com/tenic/p/16795828.html