Deploying an Elasticsearch 7.17.3 Cluster
Background:
The business needs to be able to view Kubernetes events. The number of event keys in the etcd cluster frequently climbs past 2,000,000, which puts very heavy pressure on etcd. Each time this happens the event keys have to be deleted by hand, which is time-consuming, carries a high risk, and recovers slowly.
Solution:
1. Split the event keys out of the main etcd cluster: create a new etcd cluster on the same machines, using different ports (2479, 2480). A one-click deployment of this dedicated event etcd will be documented later (see the sketch after this list for the apiserver flag this involves).
2. Export all of the cluster's events into an Elasticsearch cluster.
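For context, the etcd pressure can be confirmed by counting the event keys, and option 1 is usually wired up through the kube-apiserver --etcd-servers-overrides flag. This is only a rough sketch; the endpoints, certificate paths and the /registry prefix are assumptions that depend on how the cluster was deployed:
# Count event keys under the (assumed) default registry prefix
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/events --prefix --keys-only | grep -c '^/registry'
# Route only the events resource to the dedicated etcd on port 2479 (added to the kube-apiserver flags)
# --etcd-servers-overrides=/events#https://10.10.11.143:2479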
Official site: https://www.elastic.co/
Download page: https://www.elastic.co/cn/downloads/past-releases/
Machine configuration:
10.10.11.143 | 126 GB RAM, 32 cores |
10.10.11.25  | 126 GB RAM, 32 cores |
10.10.11.40  | 126 GB RAM, 32 cores |
System configuration:
1. The nodes address each other by hostname, which scales better than hard-coded IP addresses. Configure /etc/hosts on all nodes:
cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.11.143 node-1
10.10.11.25 node-2
10.10.11.40 node-3
EOF
2. Disable the firewall and SELinux on all nodes
# Stop and disable firewalld
systemctl disable --now firewalld
# Disable SELinux for the current boot
setenforce 0
# Disable SELinux permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
3. On all nodes, edit /etc/security/limits.conf and add the following settings
# Temporary (current shell only)
ulimit -SHn 65535
# Permanent (note: the hard nofile limit must not be lower than the soft limit)
sed -i '/^# End/i\* soft nofile 655350' /etc/security/limits.conf
sed -i '/^# End/i\* hard nofile 655350' /etc/security/limits.conf
sed -i '/^# End/i\* soft nproc 655350' /etc/security/limits.conf
sed -i '/^# End/i\* hard nproc 655350' /etc/security/limits.conf
sed -i '/^# End/i\* soft memlock unlimited' /etc/security/limits.conf
sed -i '/^# End/i\* hard memlock unlimited' /etc/security/limits.conf
4. Adjust the kernel parameters on all nodes
cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=262144
EOF
sysctl -p
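A quick way to confirm the limits and kernel parameter took effect (the limits.conf change only applies to new login sessions):
ulimit -n                  # expect 655350
ulimit -u                  # expect 655350
sysctl vm.max_map_count    # expect vm.max_map_count = 262144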
Installing Elasticsearch
1. Download the Elasticsearch tarball from the official site
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-linux-x86_64.tar.gz
2. On all nodes, create the install directory and unpack Elasticsearch
mkdir -p /home/work && tar -xvf elasticsearch-7.17.3-linux-x86_64.tar.gz -C /home/work && mv /home/work/elasticsearch-7.17.3 /home/work/elasticsearch
3. On all nodes, create the user that will run the Elasticsearch service
useradd elastic
4. On all nodes, edit the Elasticsearch config file /home/work/elasticsearch/config/elasticsearch.yml; the main settings to change are listed below
cluster.name: es-cluster
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.10.11.143", "10.10.11.25", "10.10.11.40"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
# Note: node.name must be different on each node (node-1 / node-2 / node-3); a small sketch for adjusting it per node follows
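If the machine hostnames have been set to node-1/node-2/node-3 (matching /etc/hosts above), a one-liner like this saves editing node.name by hand on each machine; otherwise simply edit the value manually. The hostnamectl step is an assumption about how the hosts are named:
hostnamectl set-hostname node-2   # example for the second machine
sed -i "s/^node.name:.*/node.name: $(hostname -s)/" /home/work/elasticsearch/config/elasticsearch.yml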
5. On all nodes, create the data and log directories for the Elasticsearch service
mkdir -p /data/elasticsearch /var/log/elasticsearch
chown -R elastic:elastic /data/elasticsearch /var/log/elasticsearch /home/work/elasticsearch
# Note: ideally put this data path on an NFS mount (or other large storage), otherwise a single machine's local disk will not be big enough!
6. On all nodes, edit /home/work/elasticsearch/bin/elasticsearch and add the following environment variables at the top of the file
#!/bin/bash
# CONTROLLING STARTUP:
#
# This script relies on a few environment variables to determine startup
# behavior, those variables are:
#
# ES_PATH_CONF -- Path to config directory
# ES_JAVA_OPTS -- External Java Opts on top of the defaults set
#
# Optionally, exact memory values can be set using the `ES_JAVA_OPTS`. Example
# values are "512m", and "10g".
#
# ES_JAVA_OPTS="-Xms8g -Xmx8g" ./bin/elasticsearch
# Add the following two lines
export ES_JAVA_HOME=/home/work/elasticsearch/jdk
export PATH=$ES_JAVA_HOME/bin:$PATH
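The ES_JAVA_OPTS example above only applies per invocation; in 7.x a persistent heap size can also be dropped into config/jvm.options.d. The 8g value below is just an assumption, size the heap to roughly half of the machine's RAM but no more than about 31g:
cat > /home/work/elasticsearch/config/jvm.options.d/heap.options <<EOF
-Xms8g
-Xmx8g
EOF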
7. On all nodes, set up systemd management for the Elasticsearch service
cat > /usr/lib/systemd/system/elasticsearch.service <<EOF
[Unit]
Description=elasticsearch
After=network.target
[Service]
Type=forking
User=elastic
ExecStart=/home/work/elasticsearch/bin/elasticsearch -d
PrivateTmp=true
# Maximum number of open files for this process
LimitNOFILE=65535
# Maximum number of processes for this process
LimitNPROC=65535
# Maximum virtual memory
LimitAS=infinity
# Maximum file size
LimitFSIZE=infinity
# Stop timeout; 0 means never time out
TimeoutStopSec=0
# SIGTERM is the signal used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM process
KillMode=process
# Do not SIGKILL the Java process
SendSIGKILL=no
# 143 is the normal exit status after SIGTERM
SuccessExitStatus=143
# Required for bootstrap.memory_lock: true
LimitMEMLOCK=infinity
[Install]
WantedBy=multi-user.target
EOF
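Because a new unit file was just written, make systemd re-read its configuration before enabling the service:
systemctl daemon-reload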
8. On all nodes, start the Elasticsearch service and enable it on boot
systemctl enable --now elasticsearch
9. Check the cluster status (the --user credentials only matter once X-Pack security has been enabled in the next section; substitute the elastic password you set there)
curl --user elastic:Elastic -XGET 'http://localhost:9200/_cluster/health/?pretty'
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 11,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
# Query node information
curl --user elastic:Elastic -XGET 'http://localhost:9200/_cat/nodes?pretty'
Configuring the X-Pack plugin
1. Generate the certificate; this step only needs to be run on node-1
cd /home/work/elasticsearch && bin/elasticsearch-certutil ca -out config/elastic-certificates.p12 -pass "123456"
2. Create the client certificate (the PEM file that Kibana will use)
openssl pkcs12 -nodes -passin pass:"123456" -in config/elastic-certificates.p12 -out config/elastic-ca.pem
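Optionally, confirm the extracted PEM really contains the CA certificate (just a sanity check, not required):
openssl x509 -in config/elastic-ca.pem -noout -subject -dates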
3. Copy the generated certificate to the other nodes
scp config/{elastic-certificates.p12,elasticsearch.keystore} [email protected]:/home/work/elasticsearch/config/
scp config/{elastic-certificates.p12,elasticsearch.keystore} [email protected]:/home/work/elasticsearch/config/
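The certificates were generated and copied as root, so make sure they end up readable by the service user on every node, for example:
chown elastic:elastic /home/work/elasticsearch/config/elastic-certificates.p12 /home/work/elasticsearch/config/elasticsearch.keystore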
4. On all nodes, edit elasticsearch.yml and append the following settings at the end of the file
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.keystore.password: "123456"
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.password: "123456"
# The two settings below only take effect if HTTP TLS is also enabled (xpack.security.http.ssl.enabled: true); HTTP stays plain in this setup
xpack.security.http.ssl.keystore.password: "123456"
xpack.security.http.ssl.truststore.password: "123456"
5. Restart the Elasticsearch service on all nodes
systemctl restart elasticsearch
6. Set passwords for the Elasticsearch built-in users (run one of the two commands, on a single node only)
cd /home/work/elasticsearch
./bin/elasticsearch-setup-passwords interactive # enter passwords manually
./bin/elasticsearch-setup-passwords auto # generate random passwords automatically
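Once the passwords are set, a quick authentication check against the security API (replace 123456 with the elastic password you chose):
curl --user elastic:123456 -XGET 'http://localhost:9200/_security/_authenticate?pretty'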
Installing Kibana
Deploy the Kibana service on any one of the nodes.
1. Download the Kibana tarball
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.3-linux-x86_64.tar.gz
2. Unpack and install the Kibana service
tar xf kibana-7.17.3-linux-x86_64.tar.gz && mv kibana-7.17.3-linux-x86_64 /home/work/kibana && chown -R elastic:elastic /home/work/kibana
3. Edit the Kibana config file /home/work/kibana/config/kibana.yml; the main settings to change are listed below
server.port: 5601
server.host: "0.0.0.0"
server.publicBaseUrl: "http://10.10.11.143:5601"
elasticsearch.hosts: ["http://10.10.11.143:9200","http://10.10.11.25:9200","http://10.10.11.40:9200"]
kibana.index: ".kibana"
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
elasticsearch.ssl.certificateAuthorities: [ "/home/work/kibana/config/elastic-ca.pem" ]
i18n.locale: "zh-CN"
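The elasticsearch.ssl.certificateAuthorities path above assumes the PEM generated earlier has been copied into Kibana's config directory; if Kibana runs on node-1 this is a local copy, otherwise use scp:
cp /home/work/elasticsearch/config/elastic-ca.pem /home/work/kibana/config/
chown elastic:elastic /home/work/kibana/config/elastic-ca.pem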
4. Set up systemd management for the Kibana service
cat > /usr/lib/systemd/system/kibana.service <<'EOF'
[Unit]
Description=kibana
After=network.target
[Service]
User=elastic
Group=elastic
ExecStart=/home/work/kibana/bin/kibana
ExecStop=/usr/bin/kill -15 $MAINPID
ExecReload=/usr/bin/kill -HUP $MAINPID
Type=simple
RemainAfterExit=yes
PrivateTmp=true
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=65535
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
[Install]
WantedBy=multi-user.target
EOF
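As with Elasticsearch, make systemd pick up the new unit file first:
systemctl daemon-reload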
5. Start Kibana and enable it on boot
systemctl enable --now kibana
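Kibana takes a short while to start; a quick check that it is listening and responding (expect an HTTP 200, or a 302 redirect to the login page):
ss -lntp | grep 5601
curl -sI http://localhost:5601/ | head -n1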
Deploying the kubernetes-event-exporter component
1. Create the Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: etcd-event
2. Create the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: etcd-event
  name: event-exporter
3. Create the ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: event-exporter-clusterrole
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
4. Create the ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: event-exporter-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: event-exporter
    namespace: etcd-event
roleRef:
  kind: ClusterRole
  name: event-exporter-clusterrole
  apiGroup: rbac.authorization.k8s.io
5. Create the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: etcd-event
data:
  config.yaml: |
    logLevel: error
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "es"
    receivers:
      - name: "es"
        elasticsearch:
          hosts:
            - http://10.10.11.143:9200
            - http://10.10.11.25:9200
            - http://10.10.11.40:9200
          index: bjyz-ecitest-events
          indexFormat: "bjyz-ecitest-events-{2006-01-02}"
          useEventID: true
          username: elastic
          password: 123456
6. Create the Deployment; it must live in the same namespace (etcd-event) as the ServiceAccount and ConfigMap it references. Applying and verifying all of the manifests is sketched after the YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: etcd-event
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-exporter
      version: v1
  template:
    metadata:
      labels:
        app: event-exporter
        version: v1
    spec:
      serviceAccountName: event-exporter
      containers:
        - name: event-exporter
          image: opsgenie/kubernetes-event-exporter:0.9
          imagePullPolicy: IfNotPresent
          args:
            - -conf=/data/config.yaml
          volumeMounts:
            - mountPath: /data
              name: cfg
      volumes:
        - name: cfg
          configMap:
            name: event-exporter-cfg
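Assuming the manifests above are saved to separate files (the file names below are arbitrary), they can be applied and verified like this; after a few minutes new daily indices should appear in Elasticsearch:
kubectl apply -f namespace.yaml -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml -f configmap.yaml -f deployment.yaml
kubectl -n etcd-event get pods -l app=event-exporter
curl --user elastic:123456 'http://10.10.11.143:9200/_cat/indices/bjyz-ecitest-events-*?v'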
References:
https://github.com/resmoio/kubernetes-event-exporter
https://59izt.com/2023/12/26/Linux/041-CentOS7-%E9%83%A8%E7%BD%B2-Elasticsearch-%E9%9B%86%E7%BE%A4/