
Elasticsearch 6.8.5 Cluster Deployment (TLS Authentication)

Posted: 2024-07-17
Tags: tls, http, es6.8, elastic, 19200, 192.168, elasticsearch, cluster, root

Environment:
OS: CentOS 7
ES: 6.8.5

Node 1: 192.168.1.101
Node 2: 192.168.1.104
Node 3: 192.168.1.105

 

###################### Install ES on every node ######################
1. Create the installation directory and the data and log directories
[root@es soft]# mkdir -p /usr/local/services
[root@es soft]# mkdir -p /home/middle/elasticsearch/data
[root@es soft]# mkdir -p /home/middle/elasticsearch/logs


2. Create the user and group
groupadd -g 1500 elasticsearch
useradd -u 1500 -g elasticsearch elasticsearch
passwd elasticsearch


3. Upload the package to the server, unpack it, and move it into place
[root@rac01 soft]# cd /soft
[root@rac01 soft]# tar -xvf elasticsearch-6.8.5.tar.gz
[root@rac01 soft]# mv elasticsearch-6.8.5 /usr/local/services/elasticsearch

 

4. Change ownership of the elasticsearch directory to the elasticsearch user
[root@es config]# cd /usr/local/services
[root@es services]# chown -R elasticsearch:elasticsearch ./elasticsearch

Also hand the data and log directories to elasticsearch:
[root@es services]# cd /home/middle
[root@es middle]# chown -R elasticsearch:elasticsearch ./elasticsearch


5. Create the backup directory
[root@rac01 home]# mkdir -p /home/middle/esbak
[root@rac01 home]# cd /home/middle
[root@rac01 home]# chown -R elasticsearch:elasticsearch ./esbak


6. Edit the configuration file
[root@rac01 middle]# su - elasticsearch
[elasticsearch@rac01 ~]$ cd /usr/local/services/elasticsearch/config
[elasticsearch@es config]$ vi elasticsearch.yml

cluster.name: escluster_hxl
node.name: node-101
path.data: /home/middle/elasticsearch/data
path.logs: /home/middle/elasticsearch/logs
network.host: 192.168.1.101
http.port: 19200
discovery.zen.ping.unicast.hosts: ["192.168.1.101", "192.168.1.104","192.168.1.105"]
discovery.zen.minimum_master_nodes: 2
path.repo: /home/middle/esbak
http.cors.enabled: true
http.cors.allow-origin: "*"


On the other two nodes only node.name and network.host change. (discovery.zen.minimum_master_nodes: 2 is the quorum for three master-eligible nodes: 3/2 + 1 = 2.)
Node 2:
node.name: node-104
network.host: 192.168.1.104

Node 3:
node.name: node-105
network.host: 192.168.1.105


7. Set the JVM heap (/usr/local/services/elasticsearch/config/jvm.options)
-Xms8g
-Xmx8g
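The 8 GB figure is this cluster's choice. A common sizing rule (general Elastic guidance, not from this guide) is to give the heap about half of RAM, capped below 32 GB to keep compressed object pointers; a sketch that computes such a value:

```shell
# Sketch: heap = min(half of total RAM, 31g), per Elastic's general sizing guidance
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
half_gb=$(( mem_kb / 1024 / 1024 / 2 ))
heap_gb=$(( half_gb > 31 ? 31 : half_gb ))
if [ "$heap_gb" -lt 1 ]; then heap_gb=1; fi   # floor for small machines
echo "-Xms${heap_gb}g -Xmx${heap_gb}g"
```

The printed pair is what goes into jvm.options; -Xms and -Xmx should always match so the heap never resizes at runtime.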


8. Edit /usr/local/services/elasticsearch/bin/elasticsearch
# ES_JAVA_OPTS="-Xms8g -Xmx8g" ./bin/elasticsearch
export ES_HEAP_SIZE=8g

 

9. System settings
Run this on every node so each machine can start ES:
[root@rac01 middle]# su - elasticsearch
[elasticsearch@rac01 ~]$ ulimit -Hn
65536

If the value is not 65536, append the following to /etc/security/limits.conf:

* soft nofile 65536
* hard nofile 65536

If startup fails with:
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
append this line to /etc/sysctl.conf:
vm.max_map_count=262144
then apply it:
[root@localhost ~]# sysctl -p
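The checks above can be scripted; a small pre-flight sketch using the thresholds from this guide:

```shell
# Pre-flight check for the two limits this step requires (thresholds from this guide)
req_nofile=65536
req_map=262144
cur_nofile=$(ulimit -Hn)                      # may print "unlimited" on some systems
cur_map=$(cat /proc/sys/vm/max_map_count)
if [ "$cur_nofile" = "unlimited" ] || [ "$cur_nofile" -ge "$req_nofile" ]; then
  echo "nofile OK ($cur_nofile)"
else
  echo "nofile too low: $cur_nofile < $req_nofile"
fi
if [ "$cur_map" -ge "$req_map" ]; then
  echo "max_map_count OK ($cur_map)"
else
  echo "max_map_count too low: $cur_map < $req_map"
fi
```

Run it as the elasticsearch user on each node before starting the cluster.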

 

10. Stop the firewall
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service
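Disabling the firewall is this guide's approach; a narrower alternative (my sketch, not from the guide) opens only the two ports ES uses. 19200 is the http.port set earlier; 9300 is assumed because transport.tcp.port is left at its default:

```shell
# Open only the ES ports instead of disabling firewalld entirely
http_port=19200        # http.port used in this guide
transport_port=9300    # ES default transport port (transport.tcp.port is not set here)
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --permanent --add-port=${http_port}/tcp
  firewall-cmd --permanent --add-port=${transport_port}/tcp
  firewall-cmd --reload
fi
```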

 

11. Install Java 1.8, otherwise startup will fail
Reference:
https://www.cnblogs.com/hxlasky/p/14775706.html

 

12. Start
Start only after all three nodes are configured.
Run on every node:
[root@rac01 middle]# su - elasticsearch
[elasticsearch@es ~]$ cd /usr/local/services/elasticsearch/bin
./elasticsearch -d

13. Verify startup
curl 'http://192.168.1.101:19200/_cat/nodes?v'
curl http://192.168.1.104:19200/?pretty
curl http://192.168.1.105:19200/?pretty
curl -X GET 'http://192.168.1.101:19200/_cat/indices?v'

 

##################################### Generate certificates ###############################

Do this on one node only; I use node 1 here.
1. Run the command to create the CA:

su - elasticsearch
[elasticsearch@rac01 bin]$ cd /usr/local/services/elasticsearch/bin
[elasticsearch@rac01 bin]$ ./elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: ## press Enter
Enter password for elastic-stack-ca.p12 : ## press Enter

 

2. Generate elastic-certificates.p12 from the elastic-stack-ca.p12 file:

[elasticsearch@rac01 bin]$ ./elasticsearch-certutil cert --ca elastic-stack-ca.p12
Enter password for CA (elastic-stack-ca.p12) : ## press Enter
Please enter the desired output file [elastic-certificates.p12]: ## press Enter
Enter password for elastic-certificates.p12 : ## press Enter

 

Move both files into the config directory:
[elasticsearch@rac01 bin]$ mv elastic-stack-ca.p12 ../config/
[elasticsearch@rac01 bin]$ mv elastic-certificates.p12 ../config/

 

3. Copy the two files from node 1 to the other nodes
[elasticsearch@rac01 bin]$ cd /usr/local/services/elasticsearch/config
[elasticsearch@rac01 config]$ scp elastic-certificates.p12 192.168.1.104:/usr/local/services/elasticsearch/config/
[elasticsearch@rac01 config]$ scp elastic-stack-ca.p12 192.168.1.104:/usr/local/services/elasticsearch/config/

[elasticsearch@rac01 config]$ scp elastic-certificates.p12 192.168.1.105:/usr/local/services/elasticsearch/config/
[elasticsearch@rac01 config]$ scp elastic-stack-ca.p12 192.168.1.105:/usr/local/services/elasticsearch/config/

 

 

4. Update the configuration file
Append the following to elasticsearch.yml on every node:

[root@rac01 middle]# su - elasticsearch
vi /usr/local/services/elasticsearch/config/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

 

5. Kill the ES process on every node
kill -9 <pid>

6. Restart every node
su - elasticsearch
/usr/local/services/elasticsearch/bin/elasticsearch -d

 

Requests now require authentication:

[elasticsearch@localhost config]$ curl 'http://192.168.1.101:19200/_cat/nodes?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication token for REST request [/_cat/nodes?pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication token for REST request [/_cat/nodes?pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}

 

7. Set passwords

Run on one node only (node 1 here); I set every password to elastic.
[elasticsearch@rac01 bin]$ cd /usr/local/services/elasticsearch/bin
[elasticsearch@rac01 bin]$ ./elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

 

 

8. Verify
curl -u elastic:elastic 'http://192.168.1.101:19200/_cat/nodes?v'
curl -u elastic:elastic 'http://192.168.1.104:19200/_cat/nodes?v'
curl -u elastic:elastic 'http://192.168.1.105:19200/_cat/nodes?v'
curl -u elastic:elastic 'http://192.168.1.101:19200/_cat/health?v'

 

9. Data verification
List indices:
curl -u elastic:elastic -X GET 'http://192.168.1.101:19200/_cat/indices?v'

Create an index and write a document on node 1:
curl -u elastic:elastic -XPUT 'http://192.168.1.101:19200/db_customer'
curl -u elastic:elastic -H "Content-Type: application/json" -XPUT 'http://192.168.1.101:19200/db_customer/tb_test/1' -d '{"name": "huangxueliang"}'

Read the document back:
curl -u elastic:elastic -XGET 'http://192.168.1.101:19200/db_customer/tb_test/1?pretty'

Read the same document from the other nodes:
curl -u elastic:elastic -XGET 'http://192.168.1.104:19200/db_customer/tb_test/1?pretty'
curl -u elastic:elastic -XGET 'http://192.168.1.105:19200/db_customer/tb_test/1?pretty'


Delete the index:
curl -u elastic:elastic -XDELETE 'http://192.168.1.101:19200/db_customer?pretty'

 

################################ Role verification ###########################

1. Unless explicitly set, both of these parameters default to true:
node.master: true
node.data: true

 

2. Test
Node 1:
node.master: true
node.data: false

Restart that node:
kill <pid>
/usr/local/services/elasticsearch/bin/elasticsearch -d

Try writing data (it still succeeds):
curl -u elastic:elastic -XPUT 'http://192.168.1.101:19200/db_customer'
curl -u elastic:elastic -H "Content-Type: application/json" -XPUT 'http://192.168.1.101:19200/db_customer/tb_test/1' -d '{"name": "huangxueliang"}'


List indices:
curl -u elastic:elastic -X GET 'http://192.168.1.101:19200/_cat/indices?v'
curl -u elastic:elastic -X GET 'http://192.168.1.104:19200/_cat/indices?v'
curl -u elastic:elastic -X GET 'http://192.168.1.105:19200/_cat/indices?v'

 

Conclusion: the node still accepts read/write/search requests and routes them, but stores no shard data itself.
Check connections and running tasks:
netstat -an |grep 'ESTABLISHED' |grep -i '19200' |wc -l
curl -u elastic:elastic -X GET "http://192.168.1.101:19200/_tasks?pretty"
curl -u elastic:elastic -X GET "http://192.168.1.104:19200/_tasks?pretty"
curl -u elastic:elastic -X GET "http://192.168.1.105:19200/_tasks?pretty"

curl -u elastic:elastic -X GET "192.168.1.101:19200/_tasks/6m66YCRmTheTzR8CFyHVvg:9475?pretty"
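To confirm the master/data split directly, the standard _cat/nodes columns can be used; a small helper sketch (the role letters are m=master-eligible, d=data, i=ingest):

```shell
# Print each node's name, its role letters, and which node is the elected master
show_roles() {
  curl -s -u elastic:elastic "http://$1:19200/_cat/nodes?v&h=name,node.role,master"
}
# usage: show_roles 192.168.1.101
```

After the change above, node-101 should show "mi" (no "d") while the other two still show "mdi".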

 

######################## Configure backups ####################################
-------- Server-side setup ----------------
1. Install the NFS server on the machine that holds the backups
[root@rac01 ios]# yum install -y nfs-utils

2. Configure the exports
$ more /etc/exports
/home/middle/esbak 192.168.1.104(insecure,rw,no_root_squash,sync,anonuid=1500,anongid=1500)
/home/middle/esbak 192.168.1.105(insecure,rw,no_root_squash,sync,anonuid=1500,anongid=1500)

 

3. Start the services
Enable rpcbind and nfs at boot (rpcbind must start first):
[root@rac01 ios]# systemctl enable rpcbind.service
[root@rac01 ios]# systemctl enable nfs-server.service
Then start them in order:
systemctl start rpcbind.service
systemctl start nfs-server.service

Or, if they were already running, restart them:
systemctl restart rpcbind.service
systemctl restart nfs-server.service

 

4. Apply and verify the exports
exportfs -r
exportfs

--------- Client-side setup ---------
Install nfs-utils as above, then start rpcbind:
[root@rac02 ios]# yum install -y nfs-utils

Enable rpcbind at boot:
[root@rac02 ios]# systemctl enable rpcbind.service

Then start rpcbind:
[root@rac02 ios]# systemctl start rpcbind.service
Note: the client does not need the nfs service running.

Check what the NFS server exports: showmount -e <nfs-server-ip>
showmount -e 192.168.1.101
Export list for 192.168.1.101:
/home/middle/esbak 192.168.1.104,192.168.1.105

Mount the export on the other two nodes:
mount -t nfs -o proto=tcp -o nolock 192.168.1.101:/home/middle/esbak /home/middle/esbak

Check that the elasticsearch user can write:
[root@rac02 ios]# su - elasticsearch
[elasticsearch@rac02 esbak]$ cd /home/middle/esbak
[elasticsearch@rac02 esbak]$ echo "112">aa.txt

On the other client:
[elasticsearch@rac02 esbak]$ echo "113">bb.txt

Both files are now visible and editable from every node.
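The mount above does not survive a reboot; a hedged /etc/fstab sketch (my addition, using the same options as the manual mount):

```shell
# The /etc/fstab line that would persist the NFS mount across reboots
# (not in the original guide). Append it on nodes 104 and 105.
fstab_line='192.168.1.101:/home/middle/esbak  /home/middle/esbak  nfs  proto=tcp,nolock  0 0'
echo "$fstab_line"
```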


Start the backup. Run it on one node only (the one hosting the NFS server):

[root@rac01 ios]# su - elasticsearch

Register the snapshot repository:
curl -u elastic:elastic -H "Content-Type: application/json" -XPUT http://192.168.1.101:19200/_snapshot/esbackup -d'{
"type": "fs",
"settings": {
"location": "/home/middle/esbak"
}
}'

## take a snapshot
curl -u elastic:elastic -H "Content-Type: application/json" -XPUT http://192.168.1.101:19200/_snapshot/esbackup/snapshot_20210520


View the repository settings:
curl -u elastic:elastic -X GET "192.168.1.101:19200/_snapshot/esbackup?pretty"
List all snapshots:
curl -u elastic:elastic -X GET "192.168.1.101:19200/_snapshot/esbackup/_all?pretty"

Delete a snapshot:
curl -u elastic:elastic -X DELETE "192.168.1.101:19200/_snapshot/esbackup/snapshot_20210520"
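The guide shows how to take and delete snapshots but not how to restore one; a hedged sketch using the standard _snapshot restore API (the target index usually has to be closed or deleted first):

```shell
# Restore one index from a snapshot (repository and names taken from this guide)
repo_url="http://192.168.1.101:19200/_snapshot/esbackup"
snapshot="snapshot_20210520"
body='{"indices": "db_customer"}'
# Guarded so the sketch is harmless where the node is unreachable
if curl -s --connect-timeout 2 -u elastic:elastic "${repo_url}?pretty" >/dev/null 2>&1; then
  curl -u elastic:elastic -H "Content-Type: application/json" \
    -XPOST "${repo_url}/${snapshot}/_restore?pretty" -d "${body}"
fi
```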

 

Backup script:
[yeemiao@yeemiao-elasticsearch-c099aef-prd ~]$ more /home/yeemiao/script/es_backup.sh
#!/bin/sh
now_date=`date "+%Y%m%d"`
delete_date=`date +%Y%m%d -d "1 days ago"`


## delete the previous day's snapshot
curl -H "Content-Type: application/json" -XDELETE "http://192.168.1.101:19200/_snapshot/esbackup/snapshot_$delete_date"

## register the snapshot repository (re-registering with the same settings is harmless)
curl -H "Content-Type: application/json" -XPUT http://192.168.1.101:19200/_snapshot/esbackup -d'{
"type": "fs",
"settings": {
"location": "/home/middle/esbak"
}
}'

## take today's snapshot
curl -H "Content-Type: application/json" -XPUT http://192.168.1.101:19200/_snapshot/esbackup/snapshot_$now_date

 

 

[root@dbslave-010007081120 script]# more es_backup_tar.sh
#!/bin/bash
now_date=`date "+%Y%m%d"`
delete_date=`date +%Y%m%d -d "3 days ago"`

tar_file=/home/middle/esbak_tar/esbak_${now_date}.tar.gz
cd /home/middle

tar -czvf ${tar_file} ./esbak

## delete the local tar archive from 3 days ago
delete_tar_file=/home/middle/esbak_tar/esbak_${delete_date}.tar.gz

if [ -f "${delete_tar_file}" ];then
rm ${delete_tar_file}
fi
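The guide doesn't show how these two scripts are scheduled; a typical crontab sketch (the path of es_backup_tar.sh is an assumption, since the guide only shows its file name):

```shell
# Sketch crontab entries (install with `crontab -e`); times and the
# /home/middle/script path are assumptions, not from the guide
cron_entries='30 1 * * * /home/yeemiao/script/es_backup.sh >> /tmp/es_backup.log 2>&1
30 2 * * * /home/middle/script/es_backup_tar.sh >> /tmp/es_backup_tar.log 2>&1'
echo "$cron_entries"
```

Running the tar job an hour after the snapshot job gives the snapshot time to finish before it is archived.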

 

From: https://www.cnblogs.com/hxlasky/p/18306535
