
Deploying an Elasticsearch 8.15 Cluster (TLS authentication)

Date: 2024-09-14 14:23:54
Tags: tls, elastic, ca, rac01, p12, elasticsearch, cluster, es8.15

Environment:
192.168.1.102
192.168.1.103
192.168.1.105

--------------------------------------------Base installation-----------------------------------

System configuration
Run on every node.
1. Kernel limits
Edit limits.conf:
vi /etc/security/limits.conf
As root, add the following two entries, then log out and back in as the elasticsearch user so they take effect:
* hard nofile 65536
* soft nofile 65536

2. Edit sysctl.conf
vi /etc/sysctl.conf
vm.max_map_count=262144

Then apply it:
[root@localhost ~]# sysctl -p
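Elasticsearch refuses to start if vm.max_map_count is below 262144. As a quick sanity check, the value can be parsed back out of the file; a minimal Python sketch (the sample text and helper name are illustrative, not from the original post):

```python
# Pull vm.max_map_count out of sysctl.conf-style text and verify it
# meets the Elasticsearch minimum of 262144.

def max_map_count(conf_text: str) -> int:
    """Return the vm.max_map_count value found in the config text."""
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("vm.max_map_count"):
            return int(line.split("=", 1)[1])
    raise ValueError("vm.max_map_count not set")

sample = """
# /etc/sysctl.conf
vm.max_map_count=262144
"""

assert max_map_count(sample) >= 262144
print(max_map_count(sample))  # 262144
```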

3. Disable the firewall
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service

 

4. Install Java (no longer needed: since ES 7, Elasticsearch ships with its own bundled JDK)
Installation guide: https://www.cnblogs.com/hxlasky/p/14775706.html
If you do install one anyway, make sure the version is 1.8 or later:
[root@rac01 soft]# java -version
java version "1.8.0_291"
Java(TM) SE Runtime Environment (build 1.8.0_291-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.291-b10, mixed mode)

5. Download the release
I used elasticsearch-8.15.1-linux-x86_64.tar.gz.
Download page:
https://www.elastic.co/cn/downloads/past-releases#elasticsearch

 

6. Create the install directory plus data and log directories
Run on every node.
[root@es soft]# mkdir -p /usr/local/services
[root@es soft]# mkdir -p /home/middle/elasticsearch/data
[root@es soft]# mkdir -p /home/middle/elasticsearch/logs

 

7. Create the user and group
Run on every node.
groupadd -g 1500 elasticsearch
useradd -u 1500 -g elasticsearch elasticsearch
passwd elasticsearch

 

8. Upload the archive to the servers
Run on every node.
Extract it and move it into place:
[root@rac01 soft]# cd /soft
[root@rac01 soft]# tar -xvf elasticsearch-8.15.1-linux-x86_64.tar.gz
[root@rac01 soft]# mv elasticsearch-8.15.1 /usr/local/services/elasticsearch

 

9. Change ownership of the elasticsearch directory to the elasticsearch user
Run on every node.
[root@es config]# cd /usr/local/services
[root@es services]# chown -R elasticsearch.elasticsearch ./elasticsearch

Also hand the data and log directories over to elasticsearch:
[root@es services]# cd /home/middle
[root@es middle]#chown -R elasticsearch.elasticsearch ./elasticsearch

 

10. Create the backup directory
Run on every node.
[root@rac01 home]#mkdir -p /home/middle/esbak
[root@rac01 home]#cd /home/middle
[root@rac01 home]#chown -R elasticsearch.elasticsearch ./esbak

 

11. Edit the configuration file
Run on every node.

[root@rac01 middle]# su - elasticsearch
[elasticsearch@rac01 ~]$ cd /usr/local/services/elasticsearch/config
[elasticsearch@es config]$ vi elasticsearch.yml

cluster.name: escluster_ysd
node.name: node01
path.data: /home/middle/elasticsearch/data
path.logs: /home/middle/elasticsearch/logs
network.host: 192.168.1.102
http.port: 19200
##discovery.zen.minimum_master_nodes: 2 ## drop this setting; it no longer exists in ES 8
discovery.seed_hosts: ["192.168.1.102", "192.168.1.103","192.168.1.105"]
cluster.initial_master_nodes: ["node01", "node02","node03"]
path.repo: /home/middle/esbak
http.cors.enabled: true
http.cors.allow-origin: "*"

xpack.security.enabled: false  ## security stays disabled for now; we enable it later

 

Configuration on the other nodes:
scp the file to the other machines, then adjust the host-specific values:
node.name becomes node02 and node03 respectively
network.host becomes each machine's own IP address
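Since only node.name and network.host differ between the three machines, the per-node files can also be templated instead of hand-edited; a minimal Python sketch (node names and IPs are the ones used in this guide, the helper itself is hypothetical):

```python
# Render a per-node elasticsearch.yml; only node.name and
# network.host vary across the three machines.

TEMPLATE = """\
cluster.name: escluster_ysd
node.name: {node_name}
path.data: /home/middle/elasticsearch/data
path.logs: /home/middle/elasticsearch/logs
network.host: {host}
http.port: 19200
discovery.seed_hosts: ["192.168.1.102", "192.168.1.103", "192.168.1.105"]
cluster.initial_master_nodes: ["node01", "node02", "node03"]
path.repo: /home/middle/esbak
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: false
"""

NODES = {
    "node01": "192.168.1.102",
    "node02": "192.168.1.103",
    "node03": "192.168.1.105",
}

def render(node_name: str) -> str:
    """Fill in the host-specific fields for one node."""
    return TEMPLATE.format(node_name=node_name, host=NODES[node_name])

print(render("node02"))
```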

 

12. Adjust the JVM options ( /usr/local/services/elasticsearch/config/jvm.options )
Run on every node. (Note the file header recommends putting overrides in jvm.options.d; here the heap is set directly in the file to 3 GB.)

[root@master ~]# more /usr/local/services/elasticsearch/config/jvm.options
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.15/jvm-options.html
## for more information.
##
################################################################



################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## which should be named with .options suffix, and the min and
## max should be set to the same value. For example, to set the
## heap to 4 GB, create a new file in the jvm.options.d
## directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.15/heap-size.html
## for more information
##
################################################################

-Xms3g
-Xmx3g


################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################

-XX:+UseG1GC

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

# Leverages accelerated vector hardware instructions; removing this may
# result in less optimal vector performance
20-:--add-modules=jdk.incubator.vector

## heap dumps

# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError

# exit right after heap dump on out of memory error
-XX:+ExitOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,level,pid,tags:filecount=32,filesize=64m
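The heap guidance in the file header above (Xms equal to Xmx, generally no more than half of system RAM, and staying under the ~32 GB compressed-oops threshold) can be expressed as a small rule-of-thumb helper; this is a sketch of the common guideline, not an official formula:

```python
def suggest_heap_gb(ram_gb: int) -> int:
    """Rule of thumb: half of RAM, capped at 31 GB so the JVM keeps
    compressed object pointers; never less than 1 GB."""
    return max(1, min(ram_gb // 2, 31))

# A 6 GB machine matches the -Xms3g/-Xmx3g used in this guide.
print(suggest_heap_gb(6))    # 3
print(suggest_heap_gb(128))  # 31
```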

 

13. Start up

Run on every node; make sure every machine starts successfully.
[root@rac01 middle]# su - elasticsearch
[elasticsearch@es ~]$ cd /usr/local/services/elasticsearch/bin
./elasticsearch -d

 

14. Check the cluster
At this point no password authentication is configured yet.
[elasticsearch@master bin]$ curl http://192.168.1.102:19200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.102 14 97 12 0.51 0.88 0.84 cdfhilmrstw - node01
192.168.1.103 15 96 22 0.97 1.00 0.48 cdfhilmrstw * node02
192.168.1.105 10 97 13 2.67 1.68 0.77 cdfhilmrstw - node03
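In the output above, the `master` column marks the elected master with `*` (here node02). For scripting, that column can be pulled out of the `_cat/nodes?v` text; a small Python sketch (the sample is the output above inlined, the helper name is made up):

```python
def elected_master(cat_nodes: str) -> str:
    """Return the node name whose 'master' column is '*'."""
    lines = cat_nodes.strip().splitlines()
    header = lines[0].split()
    m_idx = header.index("master")
    name_idx = header.index("name")
    for line in lines[1:]:
        cols = line.split()
        if cols[m_idx] == "*":
            return cols[name_idx]
    raise ValueError("no elected master in output")

sample = """\
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.102 14 97 12 0.51 0.88 0.84 cdfhilmrstw - node01
192.168.1.103 15 96 22 0.97 1.00 0.48 cdfhilmrstw * node02
192.168.1.105 10 97 13 2.67 1.68 0.77 cdfhilmrstw - node03
"""

print(elected_master(sample))  # node02
```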

 

[elasticsearch@master bin]$ curl -X GET "192.168.1.102:19200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size dataset.size

[elasticsearch@master bin]$ curl http://192.168.1.102:19200/?pretty
{
  "name" : "node01",
  "cluster_name" : "escluster_hxl",
  "cluster_uuid" : "Z9owd8vWT0qa_w9Gx8JPKA",
  "version" : {
    "number" : "8.15.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "253e8544a65ad44581194068936f2a5d57c2c051",
    "build_date" : "2024-09-02T22:04:47.310170297Z",
    "build_snapshot" : false,
    "lucene_version" : "9.11.1",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
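The root endpoint's JSON is handy for scripted version checks; a minimal sketch using only the standard library (a trimmed copy of the response above is inlined, and the compatibility check is an illustrative example):

```python
import json

# Trimmed copy of the GET / response shown above.
info = json.loads("""{
  "name": "node01",
  "version": {
    "number": "8.15.1",
    "minimum_wire_compatibility_version": "7.17.0",
    "minimum_index_compatibility_version": "7.0.0"
  }
}""")

def version_tuple(v: str):
    """Turn '8.15.1' into (8, 15, 1) for ordered comparison."""
    return tuple(int(x) for x in v.split("."))

# The node's own version must be at least its stated minimum
# wire-compatibility version.
assert version_tuple(info["version"]["number"]) >= version_tuple(
    info["version"]["minimum_wire_compatibility_version"])
print(info["version"]["number"])  # 8.15.1
```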

 

----------------------------Configuring security (TLS)--------------------------------

1. Generate certificates
Create the CA:

su - elasticsearch
[elasticsearch@rac01 bin]$ cd /usr/local/services/elasticsearch/bin
[elasticsearch@master bin]$ ./elasticsearch-certutil ca
warning: ignoring JAVA_HOME=/usr/local/java/jdk1.8.0_351; using bundled JDK
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: ## just press Enter
Enter password for elastic-stack-ca.p12 : ## just press Enter

This produces the elastic-stack-ca.p12 file:
[elasticsearch@master elasticsearch]$ pwd
/usr/local/services/elasticsearch
[elasticsearch@master elasticsearch]$ ls -1
bin
config
elastic-stack-ca.p12
jdk
lib
LICENSE.txt
logs
modules
NOTICE.txt
plugins
README.asciidoc

 

At the first prompt (Please enter the desired output file [elastic-stack-ca.p12]) you can type a file name or just press Enter to accept the default elastic-stack-ca.p12.
At the next prompt (Enter password for elastic-stack-ca.p12 :) the password may be left empty; just press Enter. The CA is now created, and the file lands in the Elasticsearch home directory.

 

2. Generate elastic-certificates.p12 from elastic-stack-ca.p12
Run: elasticsearch-certutil cert --ca elastic-stack-ca.p12
Just press Enter at every prompt:
[elasticsearch@rac01 bin]$./elasticsearch-certutil cert --ca elastic-stack-ca.p12
Enter password for CA (elastic-stack-ca.p12) :
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 :

The first prompt (Enter password for CA (elastic-stack-ca.p12) :) asks for the CA password; if none was set, just press Enter.
The next prompt (Please enter the desired output file [elastic-certificates.p12]:) names the new file; the default is elastic-certificates.p12.
The last prompt (Enter password for elastic-certificates.p12 :) optionally sets a password on it; press Enter when done.
We now have two files: elastic-stack-ca.p12 and elastic-certificates.p12.

 

Move both files into the config directory:
[elasticsearch@rac01 elasticsearch7]$ cd /usr/local/services/elasticsearch
[elasticsearch@rac01 elasticsearch7]$ mv elastic-certificates.p12 ./config/
[elasticsearch@rac01 elasticsearch7]$ mv elastic-stack-ca.p12 ./config/

 

3. Copy the two files from node 1 to the other nodes
[elasticsearch@rac01 elasticsearch7]$ cd /usr/local/services/elasticsearch/config
[elasticsearch@rac01 elasticsearch7]$ scp elastic-certificates.p12 192.168.1.103:/usr/local/services/elasticsearch/config/
[elasticsearch@rac01 elasticsearch7]$ scp elastic-stack-ca.p12 192.168.1.103:/usr/local/services/elasticsearch/config/

[elasticsearch@rac01 elasticsearch7]$ scp elastic-certificates.p12 192.168.1.105:/usr/local/services/elasticsearch/config/
[elasticsearch@rac01 elasticsearch7]$ scp elastic-stack-ca.p12 192.168.1.105:/usr/local/services/elasticsearch/config/
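After copying, it is worth confirming the keystore is byte-identical on every node (every node must use the same elastic-certificates.p12). A local sketch of the comparison logic using hashlib; the temporary files here merely stand in for the copies on two different hosts:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Two temporary copies of the same bytes, standing in for
# config/elastic-certificates.p12 on two different nodes.
data = b"fake p12 keystore bytes"
paths = []
for _ in range(2):
    fd, p = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    paths.append(p)

assert sha256_of(paths[0]) == sha256_of(paths[1])
print("checksums match")

for p in paths:
    os.unlink(p)
```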

 

4. Edit the configuration file
Append the following on every node:
[root@rac01 middle]# su - elasticsearch
vi /usr/local/services/elasticsearch/config/elasticsearch.yml
Add these settings:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

The earlier line below can now be removed:
xpack.security.enabled: false

 

5. Restart
Kill the old process, then start again:
kill <pid>

[root@rac01 middle]# su - elasticsearch
[elasticsearch@es ~]$ cd /usr/local/services/elasticsearch/bin
./elasticsearch -d

From this point on, requests require credentials:
curl 'http://192.168.1.102:19200/_cat/nodes?pretty'

 

6. Set passwords
Run on just one machine (I used 192.168.1.102); I set every password to elastic:
[elasticsearch@rac01 bin]$ cd /usr/local/services/elasticsearch/bin
[elasticsearch@rac01 bin]$ ./elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

 

7. Verify
curl -u elastic:elastic 'http://192.168.1.102:19200/_cat/nodes?v'
curl -u elastic:elastic 'http://192.168.1.102:19200/_cat/health?v'
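For a scripted green-status check, the status column can be parsed out of the `_cat/health?v` output; a small Python sketch (the sample line is illustrative only — real output comes from the curl above):

```python
def cluster_status(cat_health: str) -> str:
    """Return the 'status' column from _cat/health?v output."""
    lines = cat_health.strip().splitlines()
    header = lines[0].split()
    return lines[1].split()[header.index("status")]

# Illustrative sample; the real line comes from the curl call above.
sample = """\
epoch timestamp cluster status node.total node.data shards
1726300000 09:00:00 escluster_ysd green 3 3 2
"""

print(cluster_status(sample))  # green
```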

 

8. Data checks
List the indices:
curl -u elastic:elastic -X GET 'http://192.168.1.102:19200/_cat/indices?v'

Create an index and write a document on node 1.
Since ES 7, mapping types are gone; every document is addressed via _doc:
curl -u elastic:elastic -XPUT 'http://192.168.1.102:19200/db_customer'
curl -u elastic:elastic -H "Content-Type: application/json" -XPUT 'http://192.168.1.102:19200/db_customer/_doc/1' -d '{"name": "huangxueliang"}'

Read the document back:
curl -u elastic:elastic -XGET 'http://192.168.1.102:19200/db_customer/_doc/1?pretty'

Read the same document from the other nodes:
curl -u elastic:elastic -XGET 'http://192.168.1.103:19200/db_customer/_doc/1?pretty'
curl -u elastic:elastic -XGET 'http://192.168.1.105:19200/db_customer/_doc/1?pretty'
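`curl -u elastic:elastic` simply adds an HTTP Basic `Authorization` header, so the same authenticated requests can be issued from Python's standard library. A sketch of how that header is built (credentials and URL are the ones used in this guide; the request is constructed but not sent, since it needs the live cluster):

```python
import base64
import urllib.request

user, password = "elastic", "elastic"
token = base64.b64encode(f"{user}:{password}".encode()).decode()

req = urllib.request.Request(
    "http://192.168.1.102:19200/db_customer/_doc/1?pretty")
req.add_header("Authorization", f"Basic {token}")
# urllib.request.urlopen(req) would perform the authenticated GET
# (not executed here; it needs the live cluster).

print(token)  # ZWxhc3RpYzplbGFzdGlj
```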

 

######################Deploying Kibana#################################
Reference:
https://www.cnblogs.com/hxlasky/p/16541304.html

Installing on a single node is enough; I installed it on node 1.

 

From: https://www.cnblogs.com/hxlasky/p/18413885
