I. Elasticsearch cluster
1.1 Kernel parameter tuning:
# vim /etc/sysctl.conf
vm.max_map_count=262144
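The kernel picks up this value after sysctl reloads its configuration; a quick way to apply and confirm it (a minimal sketch, assuming the line above was added to /etc/sysctl.conf):

sysctl -p                    # reload /etc/sysctl.conf
sysctl vm.max_map_count      # should now print vm.max_map_count = 262144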
1.2 Hostname resolution
192.168.84.132 es1 es1.example.com
192.168.84.133 es2 es2.example.com
192.168.84.134 es3 es3.example.com
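These entries typically go into /etc/hosts on all three nodes. A quick sanity check that the names resolve (sketch):

getent hosts es1.example.com es2.example.com es3.example.com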
1.3 Resource limits tuning:
# vim /etc/security/limits.conf
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
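The new limits only apply to sessions opened after the change (hence the reboot below); after logging back in, they can be confirmed with, for example:

ulimit -n    # max open files, should report 1000000
ulimit -u    # max user processes, should report 1000000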
1.4 Create the unprivileged runtime user and directories:
groupadd -g 2888 elasticsearch && useradd -u 2888 -g 2888 -r -m -s /bin/bash elasticsearch
mkdir /data/esdata /data/eslogs /apps -pv
chown elasticsearch.elasticsearch /data /apps/ -R
reboot
1.5 Deploy the Elasticsearch cluster
tar xvf elasticsearch-8.5.1-linux-x86_64.tar.gz -C /apps/
ln -sv /apps/elasticsearch-8.5.1 /apps/elasticsearch
1.5.1 X-Pack certificate issuing setup
chown elasticsearch.elasticsearch /apps/ -R
root@es1:~# su - elasticsearch
elasticsearch@es1:~$ cd /apps/elasticsearch
elasticsearch@es1:/apps/elasticsearch$ vim instances.yml
instances:
  - name: "es1.example.com"
    ip:
      - "192.168.84.132"
  - name: "es2.example.com"
    ip:
      - "192.168.84.133"
  - name: "es3.example.com"
    ip:
      - "192.168.84.134"

# Generate the CA private key; the default file name is elastic-stack-ca.p12
elasticsearch@es1:/apps/elasticsearch$ bin/elasticsearch-certutil ca
# Generate the CA public certificate; the default file name is elastic-certificates.p12
elasticsearch@es1:/apps/elasticsearch$ bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Issue certificates for the cluster hosts (certificate password set to "nuo"):
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-certutil cert --silent --in instances.yml --out certs.zip --pass nuo --ca elastic-stack-ca.p12

# Distribute the certificates:
# local (node1) certificate:
elasticsearch@es1:/apps/elasticsearch$ unzip certs.zip
elasticsearch@es1:/apps/elasticsearch$ mkdir config/certs
elasticsearch@es1:/apps/elasticsearch$ cp -rp es1.example.com/es1.example.com.p12 config/certs/
# node2 certificate (set a password for the elasticsearch user so scp can authenticate):
passwd elasticsearch    # 12345678
elasticsearch@es2:/apps/elasticsearch$ mkdir config/certs
elasticsearch@es1:/apps/elasticsearch$ scp -rp es2.example.com 192.168.84.133:/apps/elasticsearch/config/certs/
# node3 certificate:
passwd elasticsearch    # 12345678
elasticsearch@es3:/apps/elasticsearch$ mkdir config/certs
elasticsearch@es1:/apps/elasticsearch$ scp -rp es3.example.com 192.168.84.134:/apps/elasticsearch/config/certs/

# Generate the keystore file (the keystore stores the certificate password "nuo"):
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore create
Created elasticsearch keystore in /apps/elasticsearch/config/elasticsearch.keystore
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Enter value for xpack.security.transport.ssl.keystore.secure_password:    # nuo
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Enter value for xpack.security.transport.ssl.truststore.secure_password:  # nuo

# Distribute the keystore file:
scp /apps/elasticsearch/config/elasticsearch.keystore 192.168.84.133:/apps/elasticsearch/config/elasticsearch.keystore
Note: just press Enter at all of the prompts above, then continue with the next step.
scp /apps/elasticsearch/config/elasticsearch.keystore 192.168.84.134:/apps/elasticsearch/config/elasticsearch.keystore
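To confirm that the keystore on each node now holds both secure settings, the stored keys can be listed (a sketch; keystore.seed is added automatically when the keystore is created):

elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-keystore list
keystore.seed
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password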
1.5.2 Edit the configuration files
1.5.2.1: node1:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: nuo-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/esdata
#
# Path to log files:
#
path.logs: /data/eslogs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.84.132", "192.168.84.133", "192.168.84.134"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["192.168.84.132", "192.168.84.133", "192.168.84.134"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: true
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es1.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es1.example.com.p12
1.5.2.2:node2:
node2's elasticsearch.yml is identical to node1's (including the commented default template sections) except for the node name and the certificate paths; the active settings are:

cluster.name: nuo-es-cluster
node.name: node-2
path.data: /data/esdata
path.logs: /data/eslogs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.84.132", "192.168.84.133", "192.168.84.134"]
cluster.initial_master_nodes: ["192.168.84.132", "192.168.84.133", "192.168.84.134"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es2.example.com/es2.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es2.example.com/es2.example.com.p12
1.5.2.3:node3:
node3's elasticsearch.yml follows the same pattern, again differing from node1's only in the node name and the certificate paths; the active settings are:

cluster.name: nuo-es-cluster
node.name: node-3
path.data: /data/esdata
path.logs: /data/eslogs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.84.132", "192.168.84.133", "192.168.84.134"]
cluster.initial_master_nodes: ["192.168.84.132", "192.168.84.133", "192.168.84.134"]
action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /apps/elasticsearch/config/certs/es3.example.com/es3.example.com.p12
xpack.security.transport.ssl.truststore.path: /apps/elasticsearch/config/certs/es3.example.com/es3.example.com.p12
1.5.3 Configure the systemd service file on each node
vim /lib/systemd/system/elasticsearch.service

[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch
Environment=ES_HOME=/apps/elasticsearch
Environment=ES_PATH_CONF=/apps/elasticsearch/config
Environment=PID_DIR=/apps/elasticsearch
WorkingDirectory=/apps/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/apps/elasticsearch/bin/elasticsearch --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target

root@es1:~# systemctl daemon-reload && systemctl start elasticsearch.service && systemctl enable elasticsearch.service
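After the service is up on all three nodes, a quick per-node sanity check (a sketch; the log file name below assumes Elasticsearch names its main log after the cluster name under path.logs):

systemctl is-active elasticsearch.service    # should print "active"
ss -ntlp | grep -E ':9200|:9300'             # HTTP (9200) and transport (9300) ports listening
tail -n 20 /data/eslogs/nuo-es-cluster.log   # watch the nodes discover each other and elect a master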
1.6.1 Set passwords for the built-in (default) accounts
elasticsearch@es1:/apps/elasticsearch$ bin/elasticsearch-setup-passwords interactive #12345678
1.6.2 Create a superuser account
elasticsearch@es1:/apps/elasticsearch$ ./bin/elasticsearch-users useradd nuo -p 12345678 -r superuser
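A quick way to confirm the new account works is to authenticate against the security API (sketch):

root@es1:~# curl -u nuo:12345678 http://192.168.84.132:9200/_security/_authenticate?pretty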
1.7 API examples
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200                     # cluster information
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/_cat                # list the supported _cat endpoints
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/_cat/master?v       # show the current master
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/_cat/nodes?v        # show the cluster nodes
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/_cat/health?v       # show cluster health
root@es1:~# curl -u nuo:12345678 -X PUT http://192.168.84.132:9200/test_index?pretty   # create index test_index; ?pretty formats the JSON response
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/test_index?pretty   # inspect the index
root@es1:~# curl -u nuo:12345678 -X POST "http://192.168.84.132:9200/test_index/_doc/1?pretty" -H 'Content-Type: application/json' -d' {"name": "Jack","age": 19}'   # index a document
root@es1:~# curl -u nuo:12345678 -X GET "http://192.168.84.132:9200/test_index/_doc/1?pretty"   # fetch the document
root@es1:~# curl -u nuo:12345678 -X PUT http://192.168.84.132:9200/test_index/_settings -H 'Content-Type: application/json' -d '{"number_of_replicas": 2}'   # change the replica count; replicas can be adjusted at any time
root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/test_index/_settings?pretty   # view the index settings
root@es1:~# curl -u nuo:12345678 -X DELETE "http://192.168.84.132:9200/test_index?pretty"   # delete the index
root@es1:~# curl -u nuo:12345678 -X POST "http://192.168.84.132:9200/test_index/_close"   # close the index
root@es1:~# curl -u nuo:12345678 -X POST "http://192.168.84.132:9200/test_index/_open?pretty"   # open the index
# Raise the maximum number of shards that can be allocated per node; the default since ES 7 is 1000, and once it is exhausted, creating new shards fails with HTTP status 400.
root@es1:~# curl -u nuo:12345678 -X PUT http://192.168.84.132:9200/_cluster/settings -H 'Content-Type: application/json' -d'
{
  "persistent" : {
    "cluster.max_shards_per_node" : "1000000"
  }
}'
# Raise both disk watermarks to 95%. By default, above 85% disk usage no new shard replicas are allocated on the node, above 90% shards start being relocated to other nodes, and at 95% (flood stage) the affected indices become read-only.
root@es1:~# curl -u nuo:12345678 -X PUT http://192.168.84.132:9200/_cluster/settings -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "95%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}'
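Both persistent settings can be read back at any time to confirm they were applied (sketch):

root@es1:~# curl -u nuo:12345678 -X GET http://192.168.84.132:9200/_cluster/settings?pretty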
II. Logstash overview and examples
1. Collecting logs from local files - steps
Install Logstash locally. Current packages ship with a bundled JDK; for older Logstash releases, the supported JDK versions are listed at https://www.elastic.co/cn/support/matrix#matrix_jvm
Write the Logstash configuration file to read the specified log file(s), which may live in different paths or be of different types
Check the configuration file syntax
Start Logstash
Verify the data in Elasticsearch

2. Installing Logstash and testing the environment
cd /usr/local/src/
rpm -ivh logstash-8.5.1-x86_64.rpm

# change the user the service runs as
vim /lib/systemd/system/logstash.service

[Unit]
Description=logstash

[Service]
Type=simple
User=root
Group=root
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384
# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
3. Testing standard output
# start Logstash directly to test standard input and output
/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug }}'
hello world
{
      "@version" => "1",
       "message" => "hello world",
    "@timestamp" => 2023-03-13T13:24:53.157946946Z,
         "event" => {
        "original" => "hello world"
    },
          "host" => {
        "hostname" => "logstash"
    }
}
[root@logstash conf.d]# vim log-file.conf
input {
  stdin {}
}
output {
  file {
    path => "tmp/logsatsh-test.log"
  }
}

Run it with: /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log-file.conf
[INFO ] 2023-03-13 21:32:38.428 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
test1
[INFO ] 2023-03-13 21:33:41.999 [[main]>worker0] file - Opening file {:path=>"/usr/share/logstash/tmp/logsatsh-test.log"}
[INFO ] 2023-03-13 21:33:42.002 [[main]>worker0] file - Creating directory {:directory=>"/usr/share/logstash/tmp"}
[INFO ] 2023-03-13 21:33:53.182 [[main]>worker0] file - Closing file /usr/share/logstash/tmp/logsatsh-test.log
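The events typed on stdin should now be in the output file; note that the relative path in the config was resolved under Logstash's working directory, as the log lines above show. For example:

cat /usr/share/logstash/tmp/logsatsh-test.log    # each event written as one JSON line by default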
4. Collecting a single file with Logstash and shipping it to Elasticsearch
vim /etc/logstash/conf.d/syslog-to-es.conf

input {
  file {
    path => "/var/log/syslog"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}

output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["192.168.84.132:9200"]
      index => "magedu-systemlog-%{+YYYY.MM.dd}"
      user => "nuo"
      password => "12345678"
    }
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog-to-es.conf -t   # check the configuration syntax
systemctl start logstash && systemctl enable logstash
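Once Logstash is running, the daily index should appear in Elasticsearch; a quick check using the superuser created earlier (sketch):

root@es1:~# curl -u nuo:12345678 "http://192.168.84.132:9200/_cat/indices/magedu-systemlog-*?v"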
III. Kibana basics, dashboard configuration and usage, and authentication
1. Kibana overview
Kibana is an open-source data analytics and visualization platform and a member of the Elastic Stack, designed to work with Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch indices, and analyze and present it in many ways using charts, tables, and maps. Kibana makes large datasets easy to understand: its browser-based interface lets you quickly build and share dynamic dashboards that track changes in Elasticsearch data in real time.

2. Installing Kibana
root@es1:/usr/local/src# rpm -ivh kibana-8.5.1-x86_64.rpm
root@es1:/usr/local/src# vim /etc/kibana/kibana.yml
# change the following settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.84.132:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "12345678"
i18n.locale: "zh-CN"

root@es1:/usr/local/src# systemctl restart kibana.service
root@es1:/usr/local/src# systemctl enable kibana.service
root@es1:/usr/local/src# lsof -i:5601
root@es1:/usr/local/src# tail -f /var/log/kibana/kibana.log
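Besides checking the port and the log, Kibana's status API gives a quick health summary (a sketch; depending on the security configuration it may require Basic auth, e.g. -u nuo:12345678):

root@es1:/usr/local/src# curl -s http://192.168.84.132:5601/api/status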
3. Using Kibana
(1) Create a data view
Stack Management --> Data Views --> Create data view
(2) Verify the data
Discover --> select your data view