Deploying an Elasticsearch cluster from binaries


This post deploys Elasticsearch version 7.17.3.

Background:

  The business needs to be able to inspect Kubernetes events. The event keys in the etcd cluster frequently climbed past 2,000,000, which put very heavy pressure on etcd; the event keys had to be deleted by hand every time, which was time-consuming, high risk, and slow to recover from.

Solution:

  1. Split the event keys out of the main etcd cluster: create a new etcd cluster on the same machines with different ports (2479/2480). A one-click deployment of this event etcd will be covered in a later post (the usual apiserver flag for this split is sketched after this list).

  2. Store all of the cluster's events in an Elasticsearch cluster; this is the approach documented below.
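
For context on option 1 (not covered further in this post), the usual way to point the Kubernetes events resource at a dedicated etcd is the kube-apiserver --etcd-servers-overrides flag. A minimal sketch, assuming the event etcd listens on client port 2479 on the same three nodes; the endpoints and the https scheme are illustrative and must match your own setup:

# Add to the kube-apiserver manifest/unit; format is <group/resource>#<comma-separated etcd endpoints>
# The override connection reuses the apiserver's existing --etcd-cafile/--etcd-certfile/--etcd-keyfile settings.
--etcd-servers-overrides=/events#https://10.10.11.143:2479,https://10.10.11.25:2479,https://10.10.11.40:2479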

Official site: https://www.elastic.co/

Download page: https://www.elastic.co/cn/downloads/past-releases/

Machine configuration:

10.10.11.143 126m32c
10.10.11.25 126m32c
10.10.11.40 126m32c

System configuration:

1. The nodes communicate with each other by hostname, which scales better than using IP addresses. Configure hosts on all nodes by editing /etc/hosts:

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.10.11.143  node-1
10.10.11.25  node-2
10.10.11.40  node-3
EOF

2. Disable the firewall and SELinux on all nodes

# Stop and disable firewalld (dnsmasq and NetworkManager, if present, can be disabled the same way)
systemctl disable --now firewalld

# Disable SELinux temporarily
setenforce 0
# Disable SELinux permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

3. On all nodes, edit /etc/security/limits.conf and add the following settings

# Temporary setting (current session only)
ulimit -SHn 65535

# Permanent settings
sed -i '/^# End/i\* soft nofile    655350' /etc/security/limits.conf
sed -i '/^# End/i\* hard nofile    655350' /etc/security/limits.conf
sed -i '/^# End/i\* soft nproc    655350' /etc/security/limits.conf
sed -i '/^# End/i\* hard nproc    655350' /etc/security/limits.conf
sed -i '/^# End/i\* soft memlock   unlimited' /etc/security/limits.conf
sed -i '/^# End/i\* hard memlock   unlimited' /etc/security/limits.conf

4. Adjust the kernel parameters on all nodes

cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=262144
EOF

sysctl -p
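
A quick way to confirm the limits and the kernel parameter are in place (the limits.conf entries only take effect for new login sessions):

ulimit -n                 # open-file limit of the current shell
ulimit -l                 # memlock limit; should be unlimited so bootstrap.memory_lock can work later
sysctl vm.max_map_count   # should print vm.max_map_count = 262144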

Installing Elasticsearch

1. Download the Elasticsearch package from the official site

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-linux-x86_64.tar.gz

2. On all nodes, create the installation directory and unpack Elasticsearch

mkdir -p /home/work && tar -xvf elasticsearch-7.17.3-linux-x86_64.tar.gz -C /home/work && mv /home/work/elasticsearch-7.17.3 /home/work/elasticsearch

3. On all nodes, create the user that runs the Elasticsearch service

useradd elastic

4. On all nodes, edit the Elasticsearch configuration file /home/work/elasticsearch/config/elasticsearch.yml; the main changes are as follows

cluster.name: es-cluster
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.10.11.143", "10.10.11.25", "10.10.11.40"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
# Note: node.name must be unique on each node

5. On all nodes, create the data and log directories for the Elasticsearch service

mkdir -p /data/elasticsearch /var/log/elasticsearch
chown -R elastic:elastic /data/elasticsearch /var/log/elasticsearch /home/work/elasticsearch

# Note: ideally the data directory here should be an NFS mount, otherwise a single machine's local disk will not be large enough (see the sketch below).
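
A minimal sketch of the NFS mount mentioned above, assuming a hypothetical NFS server nfs.example.internal exporting /export/elasticsearch; the server, export path, and mount options are placeholders, not part of the original setup:

# Requires the nfs-utils package on CentOS
mount -t nfs nfs.example.internal:/export/elasticsearch /data/elasticsearch
chown -R elastic:elastic /data/elasticsearch
# To survive reboots, add a matching entry to /etc/fstab:
# nfs.example.internal:/export/elasticsearch  /data/elasticsearch  nfs  defaults,_netdev  0 0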

6. On all nodes, edit /home/work/elasticsearch/bin/elasticsearch and add the following environment variables near the top of the file

#!/bin/bash

# CONTROLLING STARTUP:
#
# This script relies on a few environment variables to determine startup
# behavior, those variables are:
#
#   ES_PATH_CONF -- Path to config directory
#   ES_JAVA_OPTS -- External Java Opts on top of the defaults set
#
# Optionally, exact memory values can be set using the `ES_JAVA_OPTS`. Example
# values are "512m", and "10g".
#
#   ES_JAVA_OPTS="-Xms8g -Xmx8g" ./bin/elasticsearch

# Add the following configuration (use the JDK bundled with Elasticsearch)
export ES_JAVA_HOME=/home/work/elasticsearch/jdk
export PATH=$ES_JAVA_HOME/bin:$PATH

7. On all nodes, configure systemd to manage the elasticsearch service

cat > /usr/lib/systemd/system/elasticsearch.service <<EOF
[Unit]
Description=elasticsearch
After=network.target

[Service]
Type=forking
User=elastic
ExecStart=/home/work/elasticsearch/bin/elasticsearch -d
PrivateTmp=true
# Maximum number of open files for this process
LimitNOFILE=65535
# Maximum number of processes/threads for this process
LimitNPROC=65535
# Maximum virtual memory
LimitAS=infinity
# Maximum file size
LimitFSIZE=infinity
# Stop timeout; 0 means never time out
TimeoutStopSec=0
# SIGTERM is the signal used to stop the Java process
KillSignal=SIGTERM
# Send the stop signal only to the main JVM process
KillMode=process
# Do not escalate to SIGKILL, so the Java process is not force-killed
SendSIGKILL=no
# Exit status 143 (128 + SIGTERM) is treated as a clean exit
SuccessExitStatus=143
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF

8. On all nodes, start the elasticsearch service and enable it at boot

systemctl enable --now elasticsearch
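
To confirm the service started cleanly, a quick check might look like this; the log file name is derived from cluster.name, so es-cluster.log with the configuration above:

systemctl status elasticsearch
tail -f /var/log/elasticsearch/es-cluster.log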

9. Check the cluster status

curl --user elastic:Elastic -XGET 'http://localhost:9200/_cluster/health/?pretty'
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 11,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

# Query node information
curl --user elastic:Elastic -XGET 'http://localhost:9200/_cat/nodes?pretty'

Configuring the X-Pack plugin

1. Generate the certificate; this step only needs to be run on node-1

cd /home/work/elasticsearch && bin/elasticsearch-certutil ca -out config/elastic-certificates.p12 -pass "123456"

2. Create the client certificate that Kibana will use

openssl pkcs12 -nodes -passin pass:"123456" -in config/elastic-certificates.p12 -out config/elastic-ca.pem

3. Copy the generated certificates to the other nodes

scp config/{elastic-certificates.p12,elasticsearch.keystore} root@node-2:/home/work/elasticsearch/config/
scp config/{elastic-certificates.p12,elasticsearch.keystore} root@node-3:/home/work/elasticsearch/config/

4. On all nodes, append the following configuration to the end of elasticsearch.yml

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.keystore.password: "123456"
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.password: "123456"
xpack.security.http.ssl.keystore.password: "123456"
xpack.security.http.ssl.truststore.password: "123456"

5. Restart the Elasticsearch service on all nodes

systemctl restart elasticsearch

6. Set passwords for the Elasticsearch built-in users

cd /home/work/elasticsearch
./bin/elasticsearch-setup-passwords interactive    # enter the passwords manually
./bin/elasticsearch-setup-passwords auto           # or generate random passwords automatically (run only one of the two)
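
Once the passwords are set, authentication can be verified through the _security/_authenticate API. A sketch, assuming the elastic password was set to 123456 (substitute whatever you chose); note that the kibana_system password chosen here is the one referenced later in kibana.yml:

curl --user elastic:123456 -XGET 'http://localhost:9200/_security/_authenticate?pretty'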

Installing Kibana

Kibana only needs to be deployed on one of the nodes (any node will do).

1. Download the Kibana package

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.3-linux-x86_64.tar.gz

2. Unpack and install Kibana

tar xf kibana-7.17.3-linux-x86_64.tar.gz  &&  mv kibana-7.17.3-linux-x86_64 /home/work/kibana && chown -R elastic:elastic /home/work/kibana

3. Edit the Kibana configuration file config/kibana.yml; the main changes are listed below (copying the CA file is shown right after this config)

server.port: 5601
server.host: "0.0.0.0"
server.publicBaseUrl: "http://10.10.11.143:5601"
elasticsearch.hosts: ["http://10.10.11.143:9200","http://10.10.11.25:9200","http://10.10.11.40:9200"]
kibana.index: ".kibana"
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
elasticsearch.ssl.certificateAuthorities: [ "/home/work/kibana/config/elastic-ca.pem" ]
i18n.locale: "zh-CN"
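
The elastic-ca.pem referenced by elasticsearch.ssl.certificateAuthorities is the PEM file created on node-1 in the X-Pack section and must be copied into Kibana's config directory. Assuming Kibana is deployed on node-1 itself:

cp /home/work/elasticsearch/config/elastic-ca.pem /home/work/kibana/config/
chown elastic:elastic /home/work/kibana/config/elastic-ca.pem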

4. Configure systemd to manage the kibana service

cat > /usr/lib/systemd/system/kibana.service <<'EOF'
[Unit]
Description=kibana
After=network.target

[Service]
User=elastic
Group=elastic
ExecStart=/home/work/kibana/bin/kibana
ExecStop=/usr/bin/kill -15 $MAINPID
ExecReload=/usr/bin/kill -HUP $MAINPID
Type=simple
RemainAfterExit=yes
PrivateTmp=true
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=65535
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false

[Install]
WantedBy=multi-user.target
EOF

5. Start Kibana and enable it at boot

systemctl enable --now kibana
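
To check that Kibana came up, query its status API on the node where it runs, then log in at http://<node-ip>:5601 with the elastic user:

curl -s http://localhost:5601/api/status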

Deploying the kubernetes-event-exporter component

1. Create the Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: etcd-event

2. Create the ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: etcd-event
  name: event-exporter

3. Create the ClusterRole

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: event-exporter-clusterrole
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch"]

4. Create the ClusterRoleBinding

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: event-exporter-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: event-exporter
  namespace: etcd-event
roleRef:
  kind: ClusterRole
  name: event-exporter-clusterrole
  apiGroup: rbac.authorization.k8s.io

5. Create the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: etcd-event
data:
  config.yaml: |
    logLevel: error
    logFormat: json
    route:
      routes:
      - match:
        - receiver: "es"
    receivers:
      - name: "es"
        elasticsearch:
          hosts:
            - http://10.10.11.143:9200
            - http://10.10.11.25:9200
            - http://10.10.11.40:9200
          index: bjyz-ecitest-events
          indexFormat: "bjyz-ecitest-events-{2006-01-02}"
          useEventID: true
          username: elastic
          password: 123456

6. Create the Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: etcd-event
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: event-exporter
        version: v1
    spec:
      serviceAccountName: event-exporter
      containers:
      - name: event-exporter
        image: opsgenie/kubernetes-event-exporter:0.9
        imagePullPolicy: IfNotPresent
        args:
        - -conf=/data/config.yaml
        volumeMounts:
        - mountPath: /data
          name: cfg
      volumes:
        - name: cfg
          configMap:
            name: event-exporter-cfg
  selector:
    matchLabels:
      app: event-exporter
      version: v1
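
Assuming the six manifests above are saved to individual files (the file names below are illustrative), they can be applied and checked as follows; the index check assumes the elastic password set earlier (123456 in this example):

kubectl apply -f namespace.yaml -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml -f configmap.yaml -f deployment.yaml
kubectl -n etcd-event get pods
kubectl -n etcd-event logs deploy/event-exporter

# After events start flowing, daily indices should appear in Elasticsearch
curl --user elastic:123456 'http://10.10.11.143:9200/_cat/indices/bjyz-ecitest-events-*?v'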

References:

    https://github.com/resmoio/kubernetes-event-exporter  

    http://www.elasticsearch.cn/  

    https://59izt.com/2023/12/26/Linux/041-CentOS7-%E9%83%A8%E7%BD%B2-Elasticsearch-%E9%9B%86%E7%BE%A4/

