
week8

Posted: 2022-12-10 17:33:18  Views: 58

Assignments:

1. Redis sentinel: how it works, plus a cluster implementation.

2. Common LVS models: how they work, and an implementation.

3. LVS load-balancing strategies, their use cases, and 1-2 scenarios implemented with LVS DR.

4. The HTTP communication process and a summary of related technical terms.

5. Summary of network I/O models and the nginx architecture.

6. Summary of core nginx configuration and optimization.

7. A script that compiles and installs any nginx version in one step.

8. Compile a third-party nginx module and use it.

1. Redis sentinel: how it works, plus a cluster implementation

Redis sentinel

Similar to MySQL's MHA: it elects a new master automatically and performs failover. It works on top of master-replica replication, monitoring the heartbeats of one or more replication groups and failing over when the master dies.

Writes: go to the master node.

Reads: go to the replicas, which have the read-only parameter enabled.

Clients obtain the current redis-server address from sentinel.

The sentinel role (deploy several sentinels as monitor nodes; internally they run a leader election much like MHA):

1. A failover manager similar to MHA; it can run on dedicated servers.

2. A unified entry point, comparable to mycat's proxy node.

3. Client programs connect directly to the sentinel nodes' IPs.


Failover flow (clients connect to the sentinel nodes directly, much like connecting to mycat):

1. Several sentinels agree that the master is down (detected via heartbeats).

2. The sentinels promote one slave to master.

3. The remaining slaves become slaves of the new master.

4. Clients learn of the master/slave change.

5. The broken master is repaired.

Distributed components of this kind - ZooKeeper, sentinel, minio, and so on - are deployed with an odd number of nodes, at least 3, which favors clean elections.

Cluster split brain: with an even count (2 or 4 nodes), an election can tie and no new master can be decided.
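The odd-count advice is plain majority arithmetic; a small sketch (the `quorum` function is our own illustration, not a redis API):

```python
def quorum(n: int) -> int:
    """Smallest strict majority among n voters."""
    return n // 2 + 1

# An even-sized group tolerates no more failures than the odd-sized
# group one node smaller, and 2 nodes tolerate none at all:
for n in (2, 3, 4, 5):
    print(f"{n} sentinels: majority {quorum(n)}, tolerates {n - quorum(n)} down")
```

With 3 nodes a majority of 2 survives one failure; with 4 nodes you need 3 votes and still survive only one failure - hence odd counts of at least 3.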

Information exchange and health checks inside sentinel

Periodic PINGs and the like; the sentinel nodes also check each other.


Deploying the sentinels

Sentinel is started through the redis-sentinel binary, which is simply a symlink to redis-server - under the hood it is still redis-server.

[root@rocky ~]#ll /apps/redis/bin/
total 24904
-rwxr-xr-x. 1 redis redis  6550296 Oct  5 21:02 redis-benchmark
lrwxrwxrwx. 1 redis redis       12 Oct  5 21:02 redis-check-aof -> redis-server
lrwxrwxrwx. 1 redis redis       12 Oct  5 21:02 redis-check-rdb -> redis-server
-rwxr-xr-x. 1 redis redis  6766680 Oct  5 21:02 redis-cli
lrwxrwxrwx. 1 redis redis       12 Oct  5 21:02 redis-sentinel -> redis-server
-rwxr-xr-x. 1 redis redis 12176040 Oct  5 21:02 redis-server

Layout: the sentinels run on the same machines as the redis instances.

requirepass should be identical everywhere, because the sentinel election may promote any node.

##redis.conf - same password on all nodes
requirepass 123456

masterauth 123456
replicaof 10.0.0.132 6379

##same number of databases on all nodes
databases 20

127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=10.0.0.128,port=6379,state=online,offset=25494,lag=1
slave1:ip=10.0.0.129,port=6379,state=online,offset=25494,lag=0
master_failover_state:no-failover
master_replid:a1ff530e12d9991340e8e46b6051009254dbffe4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:25494
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:25494

127.0.0.1:6379> dbsize
(integer) 209

grep requirepass redis.conf

##Passwords must match on every node, including the master, since it may later become a slave
[root@slave1 etc]#grep 123456 redis.conf
masterauth 123456
requirepass 123456

echo "masterauth 123456" >> redis.conf
systemctl restart redis


Sentinel configuration: sentinel.conf

Create a sentinel.conf file; its location must match the path the service file expects.

After copying it to the other nodes, fix the owner and group (chown -R redis.redis /apps/redis); if anything fails, check the log: tail -f /apps/redis/log/sentinel.log

vim /usr/local/src/redis-6.2.7/sentinel.conf

##on the master node
cp /usr/local/src/redis-6.2.7/sentinel.conf /apps/redis/etc/

##fix permissions on the file
chown redis.redis sentinel.conf

##fix permissions on the whole directory - important
chown redis. -R /apps/redis

bind 0.0.0.0 ##listen on all interfaces; sentinel does not do this by default
port 26379 ##port
daemonize yes ##run in the background
pidfile "redis-sentinel.pid"
logfile "/apps/redis/log/sentinel.log"
dir "/apps/redis/data" ##working directory

##master IP + port + quorum (how many sentinels must agree the master is down), plus the group name
sentinel monitor mymaster 10.0.0.132 6379 2

##password of the monitored master
sentinel auth-pass mymaster 123456

##default: the master is considered down after 30s without a response; this can be lowered to 3s
sentinel down-after-milliseconds mymaster 30000
sentinel down-after-milliseconds mymaster 3000

##leave the rest at defaults
##how many slaves may sync from the new master at once after a failover: 1, to avoid a traffic spike
sentinel parallel-syncs mymaster 1

##scp -p preserves attributes; copy to the two slave nodes
for i in 128 129;do scp -p  /apps/redis/etc/sentinel.conf 10.0.0.$i:/apps/redis/etc/;done

chown redis.redis /apps/redis/etc/sentinel.conf

The redis-sentinel service file

It invokes the redis-sentinel binary. If it fails to start, check the log - this unit had failed to start before.

cat > /lib/systemd/system/redis-sentinel.service <<EOF
[Unit]
Description=Redis sentinel
After=network.target

[Service]
ExecStart=/apps/redis/bin/redis-sentinel /apps/redis/etc/sentinel.conf --supervised systemd
ExecStop=/bin/kill -s QUIT \$MAINPID
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
EOF

##start the sentinel service
systemctl daemon-reload
systemctl restart redis-sentinel.service
systemctl enable --now redis-sentinel.service

systemctl disable --now redis_6380.service

[root@rocky redis]#ss -ntlp | grep redis
LISTEN 0      511          0.0.0.0:26379      0.0.0.0:*    users:(("redis-sentinel",pid=6143,fd=6))                 
LISTEN 0      511          0.0.0.0:6379       0.0.0.0:*    users:(("redis-server",pid=5904,fd=6))                   
LISTEN 0      511            [::1]:6379          [::]:*    users:(("redis-server",pid=5904,fd=7)) 

##scp -p preserves attributes; copy to the other two nodes
for i in 128 129;do scp -p /usr/lib/systemd/system/redis-sentinel.service 10.0.0.$i:/usr/lib/systemd/system/;done

Check the sentinel service's status: sentinels=3 is the healthy value, since there are 3 sentinel nodes.

info sentinel

ss -ntlp | grep 26379
[root@slave1 etc]#ss -ntlp | grep 6379
LISTEN     0      511          *:26379                    *:*                   users:(("redis-sentinel",pid=67801,fd=6))
LISTEN     0      511          *:6379                     *:*                   users:(("redis-server",pid=35048,fd=6))
LISTEN     0      511        ::1:6379                    :::*                   users:(("redis-server",pid=35048,fd=7))

##or run it by hand
/apps/redis/bin/redis-sentinel /apps/redis/etc/sentinel.conf

##connect to sentinel
##only 1 sentinel is reported - something is wrong
[root@rocky ~]#redis-cli -p 26379
127.0.0.1:26379> info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=10.0.0.132:6379,slaves=2,sentinels=1

#Checking the slaves showed identical myid values - the file had been copied after the master's sentinel started. Re-copy the pristine sentinel.conf template:
cp /usr/local/src/redis-6.2.7/sentinel.conf /apps/redis/etc/

systemctl restart redis-sentinel

The myid values must differ:
protected-mode no
supervised systemd
user default on nopass ~* &* +@all
sentinel myid 7bf113db0a86b005517e5695ca845a91fcf92236
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel current-epoch 0
sentinel known-replica mymaster 10.0.0.129 6379
sentinel known-replica mymaster 10.0.0.128 6379
sentinel known-sentinel mymaster 10.0.0.132 26379 f617e11afbdc62e7079af265aa7f5b0de1448316

protected-mode no
supervised systemd
maxclients 4064
user default on nopass ~* &* +@all
sentinel myid eeedca70986055851c7bb6ffedb66408968ebff8
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel current-epoch 0
sentinel known-replica mymaster 10.0.0.129 6379
sentinel known-replica mymaster 10.0.0.128 6379
sentinel known-sentinel mymaster 10.0.0.132 26379 f617e11afbdc62e7079af265aa7f5b0de1448316
sentinel known-sentinel mymaster 10.0.0.128 26379 7bf113db0a86b005517e5695ca845a91fcf92236

127.0.0.1:26379> info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=10.0.0.132:6379,slaves=2,sentinels=3 --->sentinels must be 3

Failover through sentinel

Sentinel must itself detect that a redis node has failed; the election inside sentinel then runs automatically.

Note: once the dead master has been kicked out, restarting its machine makes it reload its config and rejoin the group, and the new master then ships a fresh RDB copy to it, which can load the machines heavily. The repaired old master simply comes back as a slave.

redis.conf is rewritten automatically to point at the new master.

Replication storm: with many records, a full copy is replicated to the repaired slave, driving CPU usage very high.

##stop the master and watch a new master get elected while the old one is kicked out
systemctl stop redis
tail -f /apps/redis/log/sentinel.log
15621:X 26 Oct 2022 21:38:44.781 * +sentinel-address-switch master mymaster 10.0.0.132 6379 ip 10.0.0.132 port 26379 for f617e11afbdc62e7079af265aa7f5b0de1448316
15621:X 26 Oct 2022 21:38:47.574 * +sentinel sentinel eeedca70986055851c7bb6ffedb66408968ebff8 10.0.0.129 26379 @ mymaster 10.0.0.132 6379

Alternatively, in a test environment, trigger it by hand with sentinel failover.

##sentinel elects a leader internally; the sentinel log on 10.0.0.129 shows the same
15621:X 26 Oct 2022 22:13:08.576 # +vote-for-leader 7bf113db0a86b005517e5695ca845a91fcf92236 1

##128 becomes the new master
15621:X 26 Oct 2022 22:13:10.481 # +switch-master mymaster 10.0.0.132 6379 10.0.0.128 6379

##on 128: it is now the master, and redis.conf has been rewritten to point at it (the new redis master)
[root@master ~]#redis-cli -a 123456
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=10.0.0.129,port=6379,state=online,offset=724645,lag=0
master_failover_state:no-failover
master_replid:092d02084ede344500d29ba82cc42de4a13071c2
master_replid2:ade1fc5d85f404277aced0fd10e05d96ef104f9b
master_repl_offset:724915
second_repl_offset:696752
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:159
repl_backlog_histlen:724757

129 should be a slave:
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:10.0.0.128


##restart the redis service on node 132
127.0.0.1:6379> dbsize
(integer) 209

127.0.0.1:6379> info replication
# Replication
role:slave
master_host:10.0.0.128
master_port:6379


##node priority
replica-priority 100 --->the lower the value, the higher the priority

Using sentinel from Python

Once sentinel fronts a redis replication group, clients must no longer hard-code the redis master's IP; they connect to the sentinel nodes instead.

##check the role with role
127.0.0.1:6379> role
1) "slave"
2) "10.0.0.128"

##fail over away from 128: set its priority to 50
replica-priority 50
systemctl restart redis

##trigger the switchover, passing the sentinel group name mymaster - the master has moved to node 132
127.0.0.1:26379> sentinel failover mymaster
OK

15621:X 26 Oct 2022 23:19:36.916 # +switch-master mymaster 10.0.0.128 6379 10.0.0.132 6379
15621:X 26 Oct 2022 23:19:36.916 * +slave slave 10.0.0.129:6379 10.0.0.129 6379 @ mymaster 10.0.0.132 6379
15621:X 26 Oct 2022 23:19:36.916 * +slave slave 10.0.0.128:6379 10.0.0.128 6379 @ mymaster 10.0.0.132 6379

127.0.0.1:6379> role
1) "master"
2) (integer) 1516302
3) 1) 1) "10.0.0.129"

A Python client for sentinel (it works out the redis master and slaves automatically) imports the Sentinel class and is pointed at the sentinel port numbers.

List the sentinel addresses and ports; sentinel tracks the redis master/slave topology, so the client discovers the nodes itself. The get/set helpers are obtained through mymaster, the sentinel group name.

#!/usr/bin/python3
import redis
from redis.sentinel import Sentinel

#connect to the sentinel servers (hostnames/domain names also work); pass a list - no need to decide manually which redis is the master
sentinel = Sentinel([('10.0.0.128', 26379),
                     ('10.0.0.129', 26379),
                     ('10.0.0.132', 26379)
                    ],
                    socket_timeout=0.5)

##redis auth password
redis_auth_pass='123456'

#mymaster is the group name configured in sentinel; use whatever name your deployment actually uses
#discover the master's address
master = sentinel.discover_master('mymaster')
print(master)


#discover the slaves' addresses
slave = sentinel.discover_slaves('mymaster')
print(slave)

##the two helpers give a master client for set/get and a slave client for get
#get a master connection for writing
master = sentinel.master_for('mymaster', socket_timeout=0.5, password=redis_auth_pass, db=0)
w_ret = master.set('name', 'wang')
#output: True

#get a slave connection for reading (round-robin by default)
slave = sentinel.slave_for('mymaster', socket_timeout=0.5, password=redis_auth_pass, db=0)
r_ret = slave.get('name')
print(r_ret)
#output: wang

-----
##run the script: it prints the master's IP, the slaves' IPs, and the stored value
132: redis master
129: redis slave
wang is printed - the value of name
[root@rocky script]#./sentinel_test.py 
('10.0.0.132', 6379)
[('10.0.0.129', 6379), ('10.0.0.128', 6379)]
b'wang'

Implementing redis cluster

Even with sentinel handling failover, a single redis node still serves all writes, so there is a performance ceiling; redis cluster is the distributed redis solution.

Redis deployment modes

1. Plain master-replica: one master, several replicas.

2. Cluster mode: at least 3 nodes, each a master with its own replicas.

How cluster works: the 16384 hash slots are divided among the three masters (roughly a third each); each incoming key is hashed, and it is stored on whichever node owns the resulting slot.

Data is stored distributed, giving high availability:

1. It removes the single point of failure.

2. In plain master-replica mode all data lives on the master and is copied to the replicas - there is effectively one dataset. With cluster, the data is sharded across several nodes.

3. Several masters work at once; each is still single-threaded, but overall performance improves substantially.

4. A slot is just a bucket and does not limit storage; data still lives in memory (persisted through RDB files), and a slot can hold any number of keys - check with dbsize.

5. Gauging redis usage: for a self-built redis, look at the RDB file's disk usage; for a purchased cloud redis, look at the in-memory data size. Either way it is a cache - reads from memory are what make it fast.

Application integration: the client must be configured with the IPs of all cluster nodes - the 3 masters and every slave.

Cluster routing logic

key ---> CRC16 ---> modulo 16384 ---> the slot number determines which redis node stores the key

Each node owns a different range of slot numbers.
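The routing above can be reproduced in a few lines of Python. This is a sketch of the published algorithm (CRC16/XModem, then modulo 16384), not redis source code; the slot numbers can be cross-checked against the redirect messages redis-cli -c prints.

```python
def crc16(data: bytes) -> int:
    """CRC16/XModem (poly 0x1021, init 0x0000) - the CRC redis cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster hash slots."""
    return crc16(key.encode()) % 16384

# With slots 0-5460 / 5461-10922 / 10923-16383 on three masters, the
# slot number alone determines which node stores the key.
print(key_slot("key1"))
```

redis-cli -c shows the same slot number when it redirects a SET/GET of the key.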


Cluster deployment best practice

Test environment: three machines, each running two instances, with the master/replica pairs staggered so that a master and its replica never share a machine.

Production: six redis machines in master-replica pairs, for easier disaster recovery and failover.

Shard count: the number of pieces the full slot range is split into across the nodes.
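The staggered layout for the three-host test setup can be sketched like this (the host IPs are this lab's; the rule itself is just "replica on the next host over"):

```python
# Staggered master/replica layout for 3 hosts x 2 instances each: the
# replica of every 6379 master lives on the *next* host, so no host
# ever carries both halves of the same pair.
hosts = ["10.0.0.132", "10.0.0.128", "10.0.0.129"]

pairs = [((hosts[i], 6379), (hosts[(i + 1) % len(hosts)], 6380))
         for i in range(len(hosts))]

for (m_ip, m_port), (r_ip, r_port) in pairs:
    print(f"replica {r_ip}:{r_port} -> master {m_ip}:{m_port}")
```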


Deploying the cluster

(xshell quick commands - compose pane - all sessions - run on every node)

##basic parameters
bind 0.0.0.0
masterauth <password>
requirepass <password>

Enable the cluster, the cluster config file (which records the cluster's master/replica relationships), and full slot coverage:
# cluster-enabled yes
# cluster-config-file nodes-6379.conf
# cluster-require-full-coverage yes

sed -i 's/#cluster-enabled yes/cluster-enabled yes/' /apps/redis/etc/redis.conf
sed -i -e 's/# cluster-enabled yes/cluster-enabled yes/' -e 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6379.conf/' -e 's/# cluster-require-full-coverage yes/cluster-require-full-coverage yes/' redis.conf

##to revert:
sed -i 's/cluster-enabled yes/#cluster-enabled yes/' /apps/redis/etc/redis.conf
sed -i 's/cluster-enabled yes/#cluster-enabled yes/' /apps/redis/etc/redis_6380.conf


##restart the services
systemctl restart redis
systemctl restart redis_6380

Port 16379 ---> the cluster bus
26379 ---> redis-sentinel

[root@rocky data]#ss -ntlp | grep redis
LISTEN 0      511          0.0.0.0:6379       0.0.0.0:*    users:(("redis-server",pid=5831,fd=6))                  
LISTEN 0      511          0.0.0.0:26379      0.0.0.0:*    users:(("redis-sentinel",pid=1032,fd=6))                
LISTEN 0      511          0.0.0.0:16379      0.0.0.0:*    users:(("redis-server",pid=5831,fd=10))                 
LISTEN 0      511            [::1]:6379          [::]:*    users:(("redis-server",pid=5831,fd=7))                  
LISTEN 0      511            [::1]:16379         [::]:*    users:(("redis-server",pid=5831,fd=11)) 

##error: no hash slots have been assigned yet
127.0.0.1:6379> get key
(error) CLUSTERDOWN Hash slot not served

##the cluster config file records the cluster state
ls /apps/redis/data
[root@rocky data]#cat nodes-6379.conf 
fdb7520fa4f0506e5b27019a880d1a8efa70dd9b :0@0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

Error: the redis service fails to start

tail -f ../log/redis-6379.log

63581:M 28 Oct 2022 10:35:24.594 # You can't have keys in a DB different than DB 0 when in Cluster mode. Exiting.
##fix: delete the RDB file and let it regenerate - an RDB with keys outside DB 0 cannot be loaded in cluster mode

ll /apps/redis/data
rm -rf redis.rdb

Run the cluster creation (preferably from the master node); exactly 6 nodes = 6 instances are required.

Note: requirepass and masterauth must use the same password.

The first 3 listed become masters, the last 3 slaves; the master/slave pairing is automatic.
##create the cluster
redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.128:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.128:6380 --cluster-replicas 1
cat nodes-6379.conf

[root@rocky data]#redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.128:6379 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 3 nodes and 1 replicas per node.
*** At least 6 nodes are required.

##prepare the second (6380) instance: copy the config and replace 6379 throughout, covering the listening port and log path
pidfile "/apps/redis/run/redis_6379.pid"
logfile "/apps/redis/log/redis-6379.log"
port 6379

cp redis.conf redis_6380.conf 
cp redis_6379.log redis_6380.log
cp /lib/systemd/system/redis.service /lib/systemd/system/redis_6380.service
sed -i 's/6379/6380/g' /apps/redis/etc/redis_6380.conf 
sed -i 's/6379/6380/g' /lib/systemd/system/redis_6380.service

##copy to the other nodes
for i in 128 129;do scp -p /apps/redis/etc/redis_6380.conf 10.0.0.$i:/apps/redis/etc/;done
for i in 128 129;do scp -p /lib/systemd/system/redis_6380.service 10.0.0.$i:/lib/systemd/system/;done

##fix permissions and start the service
chown redis.redis -R /apps/redis
systemctl enable --now redis_6380

[root@rocky system]#ss -ntlp | grep 6380
LISTEN 0      511          0.0.0.0:6380       0.0.0.0:*    users:(("redis-server",pid=7266,fd=6))                   
LISTEN 0      511          0.0.0.0:16380      0.0.0.0:*    users:(("redis-server",pid=7266,fd=10))                  
LISTEN 0      511            [::1]:6380          [::]:*    users:(("redis-server",pid=7266,fd=7))                   
LISTEN 0      511            [::1]:16380         [::]:*    users:(("redis-server",pid=7266,fd=11)) 

The cluster creation failed: it conflicts with the sentinel service, so kill sentinel.

Creation really does require 6 nodes or it fails, and the sentinel service conflicts with cluster mode - stop it.

##stop the sentinel service
systemctl disable --now redis-sentinel.service
ps aux | grep sentinel
ss -ntlp | grep 63
[root@rocky data]#ss -ntlp | grep 63
LISTEN 0      511          0.0.0.0:6379       0.0.0.0:*    users:(("redis-server",pid=1052,fd=6))                  
LISTEN 0      511          0.0.0.0:6380       0.0.0.0:*    users:(("redis-server",pid=2204,fd=6))                  
LISTEN 0      5          127.0.0.1:631        0.0.0.0:*    users:(("cupsd",pid=1066,fd=10))                        
LISTEN 0      511          0.0.0.0:16379      0.0.0.0:*    users:(("redis-server",pid=1052,fd=10))                 
LISTEN 0      511          0.0.0.0:16380      0.0.0.0:*    users:(("redis-server",pid=2204,fd=10))                 
LISTEN 0      511            [::1]:6379          [::]:*    users:(("redis-server",pid=1052,fd=7))                  
LISTEN 0      511            [::1]:6380          [::]:*    users:(("redis-server",pid=2204,fd=7))                  
LISTEN 0      5              [::1]:631           [::]:*    users:(("cupsd",pid=1066,fd=9))                         
LISTEN 0      511            [::1]:16379         [::]:*    users:(("redis-server",pid=1052,fd=11))                 
LISTEN 0      511            [::1]:16380         [::]:*    users:(("redis-server",pid=2204,fd=11))    

---------------------------------------------------------------------------------------------------------------------
##deployment attempt

[root@rocky ~]#redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.128:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.128:6380 --cluster-replicas 1

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.0.0.128:6380 to 10.0.0.132:6379
Adding replica 10.0.0.128:6380 to 10.0.0.128:6379
Adding replica 10.0.0.128:6379 to 10.0.0.132:6380

>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: fdb7520fa4f0506e5b27019a880d1a8efa70dd9b 10.0.0.132:6379
   slots:[0-5460] (5461 slots) master
M: 7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379
   slots:[5461-10922] (5462 slots) master
S: 7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379
   replicates fdb7520fa4f0506e5b27019a880d1a8efa70dd9b
M: 0e12c385b246460651ea3c1b247f20e029da2a28 10.0.0.132:6380
   slots:[10923-16383] (5461 slots) master
S: 993a95fb5e5454e48050d997872ca57e1fe703d6 10.0.0.128:6380
   replicates 7746303b83d2943e2e6edac5fd3bdaed37f419b8
S: 993a95fb5e5454e48050d997872ca57e1fe703d6 10.0.0.128:6380
   replicates 0e12c385b246460651ea3c1b247f20e029da2a28

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Failed to send CLUSTER MEET command.

--------------------------------------------------------------------------------------------------------------------

[root@rocky data]#redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.128:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.128:6380 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.0.0.128:6380 to 10.0.0.132:6379
Adding replica 10.0.0.128:6380 to 10.0.0.128:6379
Adding replica 10.0.0.128:6379 to 10.0.0.132:6380
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: fdb7520fa4f0506e5b27019a880d1a8efa70dd9b 10.0.0.132:6379
   slots:[0-5460] (5461 slots) master
M: 7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379
   slots:[5461-10922] (5462 slots) master
S: 7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379
   replicates 7746303b83d2943e2e6edac5fd3bdaed37f419b8
M: 0e12c385b246460651ea3c1b247f20e029da2a28 10.0.0.132:6380
   slots:[10923-16383] (5461 slots) master
S: 993a95fb5e5454e48050d997872ca57e1fe703d6 10.0.0.128:6380
   replicates 0e12c385b246460651ea3c1b247f20e029da2a28
S: 993a95fb5e5454e48050d997872ca57e1fe703d6 10.0.0.128:6380
   replicates fdb7520fa4f0506e5b27019a880d1a8efa70dd9b
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

>>> Performing Cluster Check (using node 10.0.0.132:6379)
M: fdb7520fa4f0506e5b27019a880d1a8efa70dd9b 10.0.0.132:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379
   slots:[5461-10922] (5462 slots) master
S: 993a95fb5e5454e48050d997872ca57e1fe703d6 10.0.0.128:6380
   slots: (0 slots) slave
   replicates fdb7520fa4f0506e5b27019a880d1a8efa70dd9b
M: 0e12c385b246460651ea3c1b247f20e029da2a28 10.0.0.132:6380
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

##rebuild the cluster with the corrected node list
redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.129:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.129:6380 --cluster-replicas 1


Rebuilding the cluster

1. The original cluster's configuration was wrong (duplicate node addresses) and could no longer be fixed in place.

2. Delete data/nodes-*.conf on every instance, i.e. for every port: rm -rf /apps/redis/data/node*

3. Restart the redis service on every port:

systemctl restart redis

systemctl restart redis_6380

4. Run the cluster create again.

5. Verify the cluster.

Use -c to run commands in cluster mode.

[root@rocky data]#redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379@16379 master - 0 1666940261773 2 connected 5461-10922
0e12c385b246460651ea3c1b247f20e029da2a28 10.0.0.132:6380@16380 master - 0 1666940260763 4 connected 10923-16383
fdb7520fa4f0506e5b27019a880d1a8efa70dd9b 10.0.0.132:6379@16379 myself,master - 0 1666940258000 1 connected 0-5460


##basic cluster operations
[root@rocky data]#redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
0e12c385b246460651ea3c1b247f20e029da2a28 10.0.0.132:6380@16380 master - 0 1666946147938 4 connected 10923-16383
7746303b83d2943e2e6edac5fd3bdaed37f419b8 10.0.0.128:6379@16379 master - 0 1666946148946 2 connected 5461-10922
fdb7520fa4f0506e5b27019a880d1a8efa70dd9b 10.0.0.132:6379@16379 myself,master - 0 1666946146000 1 connected 0-5460

redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.129:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.129:6380 --cluster-replicas 1

redis-cli -a 123456 --cluster reshard 10.0.0.132:6380 
redis-cli -a 123456 --cluster del-node 10.0.0.132:6380 0e12c385b246460651ea3c1b247f20e029da2a28

rm -rf /apps/redis/data/node*
systemctl restart redis
systemctl restart redis_6380

[root@rocky data]#redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.129:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.129:6380 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.0.0.128:6380 to 10.0.0.132:6379
Adding replica 10.0.0.129:6380 to 10.0.0.128:6379
Adding replica 10.0.0.132:6380 to 10.0.0.129:6379

Checking the state of the cluster

(Alibaba Cloud redis, by contrast, is reached directly through a single redis domain name.)

##master/replica pairing - every 6379 instance is a master
Adding replica 10.0.0.128:6380 to 10.0.0.132:6379
Adding replica 10.0.0.129:6380 to 10.0.0.128:6379
Adding replica 10.0.0.132:6380 to 10.0.0.129:6379

master = master node, slave = replica, myself = the current node
##check the cluster node state
[root@rocky data]#redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
9cfa2df2729dc59ceb1a8cc087643dccadea4369 10.0.0.128:6379@16379 master - 0 1666947107569 2 connected 5461-10922
208f0bbc3d6f2297f8700f2a6c024da86a918938 10.0.0.132:6379@16379 myself,master - 0 1666947105000 1 connected 0-5460
ed9353a0af64b76a0681a152db87f430f8ac8849 10.0.0.128:6380@16380 slave 208f0bbc3d6f2297f8700f2a6c024da86a918938 0 1666947107000 1 connected
ca80dc10f4581d0e051a9e23aeffa8426cc32166 10.0.0.132:6380@16380 slave 8845e2d632180388c415959bfe85b50eee7f3b10 0 1666947108000 3 connected
0ba024bd1b2bd91d16c38f90e5d03b75f73b393f 10.0.0.129:6380@16380 slave 9cfa2df2729dc59ceb1a8cc087643dccadea4369 0 1666947108579 2 connected
8845e2d632180388c415959bfe85b50eee7f3b10 10.0.0.129:6379@16379 master - 0 1666947107000 3 connected 10923-16383


##check one node's replication state
redis-cli -a 123456 -c info replication
[root@rocky data]#redis-cli -a 123456 -c info replication
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
# Replication
role:master
connected_slaves:1
slave0:ip=10.0.0.128,port=6380,state=online,offset=364,lag=0

Every master node has one slave node.


##check overall cluster info
[root@rocky data]#redis-cli -a 123456 cluster info 
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6 ##6 nodes: 3 masters + 3 slaves
cluster_size:3 ##cluster size: 3 shards
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:362
cluster_stats_messages_pong_sent:341
cluster_stats_messages_sent:703
cluster_stats_messages_ping_received:336
cluster_stats_messages_pong_received:362
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:703

[root@rocky data]#cat nodes-6379.conf 
9cfa2df2729dc59ceb1a8cc087643dccadea4369 10.0.0.128:6379@16379 master - 0 1666946953301 2 connected 5461-10922
208f0bbc3d6f2297f8700f2a6c024da86a918938 10.0.0.132:6379@16379 myself,master - 0 1666946953000 1 connected 0-5460
ed9353a0af64b76a0681a152db87f430f8ac8849 10.0.0.128:6380@16380 slave 208f0bbc3d6f2297f8700f2a6c024da86a918938 0 1666946954309 1 connected
ca80dc10f4581d0e051a9e23aeffa8426cc32166 10.0.0.132:6380@16380 slave 8845e2d632180388c415959bfe85b50eee7f3b10 0 1666946952292 3 connected
0ba024bd1b2bd91d16c38f90e5d03b75f73b393f 10.0.0.129:6380@16380 slave 9cfa2df2729dc59ceb1a8cc087643dccadea4369 0 1666946953000 2 connected
8845e2d632180388c415959bfe85b50eee7f3b10 10.0.0.129:6379@16379 master - 0 1666946953000 3 connected 10923-16383

Cluster failover

1. If a master node dies, its slave is promoted to master; the data has already been replicated to it.

2. Once repaired, the old master becomes a slave of the new master.

3. The cluster's data is unaffected; every node stores its own shard of slots, and all 3 masters keep serving reads and writes.

Writing test keys, and writing from Python

With -c, redis-cli speaks the cluster protocol and automatically follows redirects to the node whose slot owns the key.

Programs in Python, Java, etc. should be configured with the IP:port of every redis node.

redis-cli -a 123456 --cluster create 10.0.0.132:6379 10.0.0.128:6379 10.0.0.129:6379 10.0.0.132:6380 10.0.0.128:6380 10.0.0.129:6380 --cluster-replicas 1

##connect in cluster mode
redis-cli -a 123456 -c 
[root@rocky data]#redis-cli -a 123456 -c 
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.

127.0.0.1:6379> set key1 v1
-> Redirected to slot [9189] located at 10.0.0.128:6379
OK

10.0.0.128:6379> get key1
"v1"

Python: insert ten thousand key-value records.

Note: the program must be configured with IP:port for every node, slaves included.

#python3 environment plus the cluster client module
yum -y install python3

pip3 install redis-py-cluster

##run the data-loading script
#!/usr/bin/env python3
from rediscluster  import RedisCluster

if __name__ == '__main__':

    startup_nodes = [
        {"host":"10.0.0.132", "port":6379},
        {"host":"10.0.0.128", "port":6379},
        {"host":"10.0.0.129", "port":6379},
        {"host":"10.0.0.132", "port":6380},
        {"host":"10.0.0.128", "port":6380},
        {"host":"10.0.0.129", "port":6380}]
    try:
        redis_conn= RedisCluster(startup_nodes=startup_nodes,password='123456', decode_responses=True)
    except Exception as e:
        print(e)

    for i in range(0, 10000):
        redis_conn.set('key'+str(i),'value'+str(i))
        print('key'+str(i)+':',redis_conn.get('key'+str(i)))

Reading back in cluster mode with -c:
[root@rocky script]#redis-cli -a 123456 -c
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> get key111
-> Redirected to slot [13680] located at 10.0.0.129:6379
"value111"

Limitations of redis cluster

Advantages:

1. Multiple masters serve reads and writes, removing the single point of failure and raising write throughput considerably.

2. Each shard has slave nodes as backup.

Disadvantages:

1. Higher cost: more machines, more maintenance effort.

2. No read/write splitting: the slaves only replicate their master's data and do not serve reads.

3. Clients must first locate the key's slot, so responses can be slower.

4. Commands that sweep the whole keyspace - mget, keys *, and the like - are awkward.

5. Cluster conflicts with sentinel; pick one or the other. Master-replica plus sentinel is usually enough - one redis deployment per project, never shared.
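A standard workaround for disadvantage 4 is redis's hash tags: if a key contains a non-empty {...} section, only that tag is hashed, so related keys can be forced into the same slot and multi-key commands across them become legal. A sketch of the tag-extraction rule (our own illustration of the documented behavior, not redis source):

```python
def hash_tag(key: str) -> str:
    """Return the substring redis cluster actually hashes for this key."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # the tag must be non-empty
            return key[start + 1:end]
    return key                       # no usable tag: hash the whole key

# Both keys hash the same substring, so MGET across them is legal:
print(hash_tag("{user:1000}.following"), hash_tag("{user:1000}.followers"))
```

Keys like {user:1000}.following and {user:1000}.followers therefore land on the same node.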

The 磐石 system, for instance, does exactly that: 1 master and 2 slaves, with sentinel on top for high availability.

2. Common LVS models: how they work, and an implementation.

3. LVS load-balancing strategies, their use cases, and 1-2 scenarios implemented with LVS DR.

4. The HTTP communication process and a summary of related technical terms.


Technical terms

http: an application-layer protocol, port 80

Web front-end languages: HTML5, JS, CSS

mime.types: the table of recognizable file suffixes and their content types; nginx's own mime.types file lists the suffixes it can serve

URL: the location of a resource on the server

5. Summary of network I/O models and the nginx architecture.

Basic nginx capabilities

nginx itself only serves static files, images, and the like; dynamic content is produced by delegating to programs such as PHP.

  • static web resource server: html, images, js, css, txt and other static assets
  • http/https reverse proxy to back-end application service ports
  • reverse proxying of dynamic requests via the FastCGI/uWSGI/SCGI protocols
  • tcp/udp request forwarding (reverse proxy)
  • imap4/pop3 reverse proxy for mail servers

nginx sustains very high concurrency: the master process manages the worker processes, whose count and limits are controlled by the master process and nginx.conf (worker_processes, worker_connections, ...).

Apache's worker model: multiple child processes, each with multiple threads, handle user requests.

[root@master conf]#pstree -p | grep httpd
           |-httpd(1112)-+-httpd(37066)
           |             |-httpd(37067)
           |             |-httpd(37068)
           |             |-httpd(37069)
           |             `-httpd(37102)

[root@master conf]#pstree -p | grep nginx
           |-nginx(51078)-+-nginx(53849)
           |              `-nginx(53850)

nginx worker model

One master process and multiple worker child processes; each incoming request is handled by one of the workers.

The master process manages the workers (spawning, reload, supervision); the workers do the actual request processing.
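The split can be illustrated with a toy pre-fork sketch (illustration only - real nginx workers run an event loop and accept connections themselves; the queues, worker function, and request strings here are all made up):

```python
import multiprocessing as mp

# Toy pre-fork model: the "master" spawns workers and supervises them;
# the workers do the actual request handling.
ctx = mp.get_context("fork")         # fork start method, as on Linux

def worker(n: int, jobs, done) -> None:
    while True:
        req = jobs.get()
        if req is None:              # shutdown signal from the master
            break
        done.put((n, req))           # record the "handled" request

jobs, done = ctx.Queue(), ctx.Queue()
workers = [ctx.Process(target=worker, args=(i, jobs, done)) for i in range(2)]
for w in workers:
    w.start()                        # master: spawn, then only supervise
for req in ("GET /", "GET /logo.png", "GET /site.css"):
    jobs.put(req)
for _ in workers:
    jobs.put(None)                   # one shutdown signal per worker
for w in workers:
    w.join()

results = [done.get() for _ in range(3)]
print(len(results), "requests handled by", len(workers), "workers")
```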


6. Summary of core nginx configuration and optimization.

nginx has several kinds of modules:

Core modules: indispensable for nginx to run at all - error/access logging (access.log, error.log), configuration parsing, the event-driven engine, process management, and other core functions.

Standard HTTP modules: HTTP protocol handling - virtual hosts (server), page charset settings, HTTP response headers, and so on.

Optional HTTP modules: extend the standard HTTP features for special services - Flash media streaming, GeoIP request parsing, transfer compression, the SSL security protocol (ssl module), gzip compression, etc.

Mail modules: mail proxying support - POP3, IMAP, and SMTP.

Stream modules: reverse proxying at the transport layer, including TCP proxying. Third-party modules extend nginx with developer-defined features - JSON support, Lua support, etc. (used alongside upstream); nginx can thereby proxy classic HTTP services as well as plain TCP services and ports.


Nginx core configuration

The server block

It depends on the http_core module.

This is the core per-site configuration block; multiple sites can be configured.

Additional site config files can be pulled into the main nginx.conf with include:

http{

error_log /apps/nginx/logs/error.log; ##error log location

server {
        listen       8081; ##listening port
        server_name  localhost; ##domain name, similar to an IIS binding; wildcards such as *.ctfmall.com are supported

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html; ##relative path, i.e. under /apps/nginx
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html; ##server-side error pages
        location = /50x.html {
            root   html;
        }
}

include /apps/nginx/conf/conf.d/*.conf; ##use a wildcard, *.conf; include takes file paths

}

The location block

##location matching rules

#syntax:
location [ = | ~ | ~* | ^~ ] uri { ... }
=  #exact match of the request string against the uri, case-sensitive; on success, stop searching and handle the request immediately
^~ #prefix match on the leftmost part of the uri; if this is the longest matching prefix, regex locations are not checked
~  #regular-expression match, case-sensitive
~* #regular-expression match, case-insensitive
no modifier #plain prefix match: matches every uri that begins with this uri
\  #escapes regex metacharacters such as . * ? so they match literally

#priority, highest to lowest:
=, ^~, ~/~*, no modifier

=: exact match - the uri must be exactly /about, case-sensitive
location = /about {
                alias /opt/html/about; 
        }

^~: prefix match - anything beginning with /about, with regex locations suppressed
location ^~ /about {
                alias /opt/html/about; 
        }

~: regex containing about, case-sensitive
location ~ /about {
                alias /opt/html/about; 
        }

##matching usually ends up case-sensitive in practice; ~ is the most commonly used form

~*: regex containing about, case-insensitive
location ~* /about {
                alias /opt/html/about; 
        }

#match by file suffix; \. is a literal dot - the basis of static/dynamic separation, routing different requests to different servers
location ~* \.(gif|jpg|jpeg)$  {
                alias /opt/html/about; 
        }

location ~* \.(gif|jpg|jpeg)$  {
                root /apps/nginx/static;
              	index index.html;
        }
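The precedence rules can be modeled with a toy matcher (a hypothetical location table, not nginx's real algorithm):

```python
import re

# (modifier, pattern) pairs in file order - a hypothetical config
locations = [
    ("=",  "/about"),
    ("^~", "/static/"),
    ("~*", r"\.(gif|jpg|jpeg)$"),
    ("",   "/"),
]

def match_location(uri: str):
    # 1. '=' exact match wins immediately
    for mod, pat in locations:
        if mod == "=" and uri == pat:
            return mod, pat
    # 2. longest prefix match; '^~' on the winner suppresses regex checks
    best = None
    for mod, pat in locations:
        if mod in ("", "^~") and uri.startswith(pat):
            if best is None or len(pat) > len(best[1]):
                best = (mod, pat)
    if best and best[0] == "^~":
        return best
    # 3. regexes in file order: '~' case-sensitive, '~*' case-insensitive
    for mod, pat in locations:
        if mod == "~" and re.search(pat, uri):
            return mod, pat
        if mod == "~*" and re.search(pat, uri, re.IGNORECASE):
            return mod, pat
    # 4. fall back to the longest plain prefix match
    return best
```

A request for /img/a.JPG falls through the exact and prefix checks and lands on the case-insensitive regex, just as nginx would route it.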

nginx optimization

##optimizations
enable gzip page compression
hotlink protection via the referer header, restricting which referring domain may link in
as a reverse proxy, enable caching on proxy_pass so hits are answered from nginx's cache; cache images at the nginx front end
enable ip_hash (or URL-based hashing) in upstream ---> fixes client sessions bouncing to a different machine on every request
SSL-encrypted access: e.g. forward 443 to port 8080 and force-redirect 80 to 443 using the rewrite module
hotlink protection: only allow jumps from a given referring link and deny everything else

##processes
raise the worker count (worker_processes); match the number of CPU cores (vCPUs)
raise worker_connections for more concurrency
workers x connections = maximum concurrency

##logging
log X-Forwarded-For to recover the real client IP on back-end web servers, passed through multi-level proxies

7. A script that compiles and installs any nginx version in one step.

Change the NGINX_FILE version number; the tarball is downloaded from the internet.

1. Fetch the nginx tarball.

2. Extract it under /usr/local/src.

3. Create the user and group.

4. Install the development packages it depends on.

5. Configure and compile in the nginx source directory: install prefix, user and group, and the modules to build.

6. Put nginx's sbin path on the PATH.

7. Write the service file.

8. Enable it at boot.

#!/bin/bash
#
#********************************************************************
#Author:			wangxiaochun
#QQ: 				29308620
#Date: 				2020-12-01
#FileName:			install_nginx.sh
#URL: 				http://www.wangxiaochun.com
#Description:		The test script
#Copyright (C): 	2021 All rights reserved
#********************************************************************
SRC_DIR=/usr/local/src
NGINX_URL=http://nginx.org/download/
NGINX_FILE=nginx-1.20.2
#NGINX_FILE=nginx-1.18.0
TAR=.tar.gz
NGINX_INSTALL_DIR=/apps/nginx
CPUS=`lscpu |awk '/^CPU\(s\)/{print $2}'`
. /etc/os-release

color () {
    RES_COL=60
    MOVE_TO_COL="echo -en \\033[${RES_COL}G"
    SETCOLOR_SUCCESS="echo -en \\033[1;32m"
    SETCOLOR_FAILURE="echo -en \\033[1;31m"
    SETCOLOR_WARNING="echo -en \\033[1;33m"
    SETCOLOR_NORMAL="echo -en \E[0m"
    echo -n "$1" && $MOVE_TO_COL
    echo -n "["
    if [ $2 = "success" -o $2 = "0" ] ;then
        ${SETCOLOR_SUCCESS}
        echo -n $"  OK  "    
    elif [ $2 = "failure" -o $2 = "1"  ] ;then 
        ${SETCOLOR_FAILURE}
        echo -n $"FAILED"
    else
        ${SETCOLOR_WARNING}
        echo -n $"WARNING"
    fi
    ${SETCOLOR_NORMAL}
    echo -n "]"
    echo 
}

os_type () {
   awk -F'[ "]' '/^NAME/{print $2}' /etc/os-release
}

os_version () {
   awk -F'"' '/^VERSION_ID/{print $2}' /etc/os-release
}

check () {
    [ -e ${NGINX_INSTALL_DIR} ] && { color "nginx is already installed; uninstall it before reinstalling" 1; exit; }
    cd  ${SRC_DIR}
    if [  -e ${NGINX_FILE}${TAR} ];then
        color "source files already in place" 0
    else
        color 'downloading the nginx source tarball' 0
        wget ${NGINX_URL}${NGINX_FILE}${TAR}
        [ $? -ne 0 ] && { color "failed to download ${NGINX_FILE}${TAR}" 1; exit; }
    fi
} 

install () {
    color "开始安装 nginx" 0
    if id nginx  &> /dev/null;then
        color "nginx 用户已存在" 1 
    else
        ##-r creates a system account; a group with the same name is created automatically
        useradd -s /sbin/nologin -r  nginx
        color "nginx user created" 0
    fi
    color "开始安装 nginx 依赖包" 0
    if [ $ID == "centos" ] ;then
	    if [[ $VERSION_ID =~ ^7 ]];then
            ##compiler, pcre dev headers, openssl dev headers (for SSL certificates), zlib dev headers (for compression)
            yum -y -q  install make gcc pcre-devel openssl-devel zlib-devel perl-ExtUtils-Embed
		elif [[ $VERSION_ID =~ ^8 ]];then
            yum -y -q install make gcc-c++ libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel perl-ExtUtils-Embed 
		else 
            color 'unsupported distribution!'  1
            exit
        fi
    elif [ $ID == "rocky"  ];then
	    yum -y -q install make gcc-c++ libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel perl-ExtUtils-Embed 
	else
        apt update &> /dev/null
        apt -y install make gcc libpcre3 libpcre3-dev openssl libssl-dev zlib1g-dev &> /dev/null
    fi

    cd $SRC_DIR
    tar xf ${NGINX_FILE}${TAR}
    NGINX_DIR=`echo ${NGINX_FILE}${TAR}| sed -nr 's/^(.*[0-9]).*/\1/p'`
    cd ${NGINX_DIR}
    ##specify the user/group and the modules to build: SSL, realip (for X-Forwarded-For), gzip_static — all http extras on top of the standard modules; the upstream load-balancing module is built in by default
    ./configure --prefix=${NGINX_INSTALL_DIR} --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module 
    ##compile and install
    make -j $CPUS && make install 
    [ $? -eq 0 ] && color "nginx 编译安装成功" 0 ||  { color "nginx 编译安装失败,退出!" 1 ;exit; }

    ##add nginx's sbin directory to PATH
    echo "PATH=${NGINX_INSTALL_DIR}/sbin:${PATH}" > /etc/profile.d/nginx.sh
  	source /etc/profile.d/nginx.sh

    cat > /lib/systemd/system/nginx.service <<EOF
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=${NGINX_INSTALL_DIR}/logs/nginx.pid
ExecStartPre=/bin/rm -f ${NGINX_INSTALL_DIR}/logs/nginx.pid
ExecStartPre=${NGINX_INSTALL_DIR}/sbin/nginx -t
ExecStart=${NGINX_INSTALL_DIR}/sbin/nginx
ExecReload=/bin/kill -s HUP \$MAINPID
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true
LimitNOFILE=100000

[Install]
WantedBy=multi-user.target
EOF
    systemctl daemon-reload
    systemctl enable --now nginx &> /dev/null 
    systemctl is-active nginx &> /dev/null ||  { color "nginx failed to start, exiting!" 1 ; exit; }
    color "nginx installation complete" 0
}

check
install
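
The NGINX_DIR line in the script derives the source directory name from the tarball file name by keeping everything up to the last digit (dropping the .tar.gz suffix); that sed expression can be checked in isolation:

```shell
# Reproduce the script's directory-name extraction.
NGINX_FILE=nginx-1.20.2
TAR=.tar.gz
NGINX_DIR=$(echo ${NGINX_FILE}${TAR} | sed -nr 's/^(.*[0-9]).*/\1/p')
echo "$NGINX_DIR"
```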

8. Compile a third-party nginx module and use it.

Print variables with echo by calling the echo module's directives directly; variable list: https://nginx.org/en/docs/varindex.html

wget https://github.com/openresty/echo-nginx-module/archive/refs/tags/v0.63.tar.gz
tar xf xxx.tar.gz

##re-run configure in the 1.20 source tree, adding the module
cd /usr/local/src/nginx-1.20.2
./configure --prefix=/apps/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module --add-module=/usr/local/src/echo-nginx-module-0.63

make -j 2 && make install 

Restart the nginx service
nginx -s stop
nginx
or
systemctl restart nginx

##location serving /echo
location /echo {
    	echo "this is a echo test";
    	echo $remote_addr;
}

##test from a remote host; the client address is returned
curl -s http://pc.catyer.cn/echo
this is a echo test
10.0.0.128

From: https://www.cnblogs.com/catyer/p/16971942.html
