
ElasticSearch 7.14: Adding a New Node to a Cluster with X-Pack Authentication Enabled


I. Description of the current environment

      The current Elasticsearch cluster has only a single node. The cluster already contains indices holding business documents (more than 200 GB of data), and X-Pack authentication is enabled. The goal is to expand this cluster by adding a replica node that, once started, automatically synchronizes data from the original master node.
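      Before expanding, the starting state of the single node (cluster health, indices, data volume, and the fact that X-Pack credentials are required) can be confirmed with a couple of quick requests. A minimal sketch, assuming the built-in elastic superuser and the HTTP port 19200 used in this setup; curl prompts for the password:

curl -u elastic -s "http://127.0.0.1:19200/_cat/health?v"
curl -u elastic -s "http://127.0.0.1:19200/_cat/indices?v&h=index,health,docs.count,store.size"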

      The key configuration of the existing Elasticsearch node is as follows:

Cluster section:
cluster.name: prometheus

Node section:
node.name: node-1

Paths section:
path.data: /data/elastic/esdata
path.logs: /data/elastic/eslog

Network section:
#network.host: 192.168.0.1   # note: external access to ES is not needed at the moment, so this was not changed to 0.0.0.0
http.port: 19200

Discovery section:
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]

Security section:
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true

 

II. Steps to expand the cluster

Overview:

1. Ports: Elasticsearch mainly uses two ports, with defaults 9200 and 9300. Port 9200 is the familiar HTTP port used by clients to read and write data and to manage ES; port 9300 is the transport port used for communication inside the cluster, so when adding a node the transport port must be reachable in both directions between the new server and the original server. If you change the default ports as in the configuration above (for security reasons), make sure the ports you use are not blocked by a firewall (see the command sketch after this list).

2. After installing the same ES version on the new server, install every plugin that is installed on the original node (copying the plugin directory from the original ES installation to the new ES directory is enough; also covered in the sketch below).

3. Generate the .p12 certificate file on the original server and copy it to the new server; it is used for node-to-node authentication.

4. Adjust the configuration files on both servers, start the original server first and then the new server, and the expansion is complete.
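Items 1 and 2 above translate into a few concrete commands. A minimal sketch, assuming firewalld on the servers and rsync/ssh access between them; <original-server-ip>, <new-server> and /path/to/elasticsearch are placeholders:

# On the original server: allow the custom HTTP and transport ports through the firewall
firewall-cmd --permanent --add-port=19200/tcp
firewall-cmd --permanent --add-port=19300/tcp
firewall-cmd --reload

# From the new server: confirm the original node's transport port is reachable
nc -zv <original-server-ip> 19300

# On the original server: list installed plugins, then sync the plugin directory to the new server
bin/elasticsearch-plugin list
rsync -av plugins/ elastic@<new-server>:/path/to/elasticsearch/plugins/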

Detailed steps:

1. Install the same ES version on the new node. (Not covered in detail here: create the user that runs ES, upload and unpack the ES archive, raise the vm.max_map_count setting, and configure ES_JAVA_HOME, exactly as for a single-node installation.)
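A condensed sketch of that preparation, run as root on the new server; the archive name and paths are placeholders, adjust them to your environment:

# Create the user that will run Elasticsearch
useradd elastic
# Unpack the same 7.14 distribution used on the original node
mkdir -p /data/elastic && tar -xzf elasticsearch-7.14.x-linux-x86_64.tar.gz -C /data/elastic/
chown -R elastic:elastic /data/elastic
# Raise vm.max_map_count as required by Elasticsearch
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
# Point Elasticsearch at the bundled JDK (add this to the elastic user's profile)
export ES_JAVA_HOME=/data/elastic/elasticsearch-7.14.x/jdk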

2. Adjust the configuration on the original server and the new server as follows:

Configuration file of the original server after adjustment:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: prometheus
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.data: true
node.master: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elastic/esdata
#
# Path to log files:
#
path.logs: /data/elastic/eslog
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
network.publish_host: 125.*.*.*   # IMPORTANT: set this to the current server's public IP
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 19200
transport.tcp.port: 19300
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# List this server's IP with its transport port and the new server's IP with its transport port; here the cluster transport port has been customized to 19300
discovery.seed_hosts: ["125.*.*.*:19300", "58.*.*.*:19300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
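A side note on this configuration: on 7.14 the node.master / node.data and transport.tcp.port settings still work, but they are deprecated and produce deprecation warnings in the log. If you prefer the non-deprecated form, the equivalent settings for node-1 would be the following (the new node-2 below would use node.roles: [ data ] instead):

node.roles: [ master, data ]
transport.port: 19300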

Configuration file of the new server:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: prometheus
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-2
node.data: true
node.master: false
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elastic/esdata
#
# Path to log files:
#
path.logs: /data/elastic/eslogs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
# Set this to an IP address that can be reached by the other server
network.publish_host: *.*.*.* 
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 19200
transport.tcp.port: 19300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# For seed_hosts here, list only the IP address and transport port of the original master node; the address must be reachable from the new node
discovery.seed_hosts: ["*.*.*.*:19300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

[Note: if your servers are not on the same LAN, or are on different network segments, network.publish_host must be set, and its value must be an IP that the other server can reach (for example a public IP); otherwise the new node will not be able to join the cluster.]
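Once a node is up, you can confirm which address it actually publishes for transport traffic (the publish_address field in the response). A quick check, replacing <node-ip> and supplying the elastic password when prompted:

curl -u elastic -s "http://<node-ip>:19200/_nodes/transport?pretty"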

3. Generate the .p12 certificate on the original server (if one has not been generated before):

bin/elasticsearch-certutil ca -out config/elastic-certificates.p12 -pass ""
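For reference, the official 7.x security docs use a two-step flow instead: generate a CA first, then a node certificate signed by that CA, and point both the keystore and the truststore at the certificate file. Either way, the same .p12 file must end up on every node. A sketch of the two-step variant (file names are just examples):

bin/elasticsearch-certutil ca --out config/elastic-stack-ca.p12 --pass ""
bin/elasticsearch-certutil cert --ca config/elastic-stack-ca.p12 --ca-pass "" --out config/elastic-certificates.p12 --pass ""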

Copy the elastic-certificates.p12 file to the config directory of the ES installation on the new server (the directory containing elasticsearch.yml), use chown to make the elastic user the owner of the certificate, and chmod its permissions to 755.
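For example, a minimal sketch (<new-server> and the install path are placeholders; a stricter mode such as 640 also works as long as the elastic user can read the file):

# On the original server
scp config/elastic-certificates.p12 elastic@<new-server>:/path/to/elasticsearch/config/
# On the new server
chown elastic:elastic /path/to/elasticsearch/config/elastic-certificates.p12
chmod 755 /path/to/elasticsearch/config/elastic-certificates.p12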

[Important: do NOT run elasticsearch-setup-passwords from the bin directory on the new server to reconfigure passwords, because authentication information must stay consistent across all nodes in the cluster. Copying the .p12 certificate to the new server, as done above, is all that is needed; the existing elastic username and password will keep working when you access the new node.]

4. Start the ES node on the original server:

bin/elasticsearch -d

5. Start the ES node on the new server:

bin/elasticsearch -d

Watch the logs on both servers:

tail -f /data/elastic/eslogs/prometheus.log

If anything goes wrong, the error messages will show up in this file; adjust the configuration according to them.

[This log lives in the path.logs directory from elasticsearch.yml, and the file name is the cluster.name.]

6. The configuration is now complete. Check the shard status on the original node and on the new node:

Shard status on the original server:

 

Shard status on the new server:
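The same information can also be checked from the command line on either node (supply the elastic password when prompted). With both nodes running, _cat/nodes should list node-1 and node-2, and, assuming the indices keep the default of one replica, the replica shards should show as STARTED on node-2 once they have finished copying:

curl -u elastic -s "http://127.0.0.1:19200/_cat/nodes?v"
curl -u elastic -s "http://127.0.0.1:19200/_cat/shards?v"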

 

Most tutorials on the web either install a brand-new multi-node ES cluster from scratch or start from a cluster without X-Pack security, and after adding nodes they run elasticsearch-setup-passwords from the bin directory to set passwords. That does not fit the situation here (a single-node cluster that already has passwords set and is serving production, so resetting passwords is not an option), hence this write-up.

 

