1. High-Availability Clusters
(1) A single server
(2) keepalived
Characteristics a well-built cluster should have:
- Load balancing: improves cluster performance (LVS, Nginx, HAProxy, SLB, F5)
- Health checks (probes): applied to the schedulers and to the backend node servers (Keepalived, Heartbeat)
- Failover: master/backup switchover via a floating VIP
Common ways health checks (probes) work:
Heartbeat messages: VRRP advertisements, ping/pong
TCP port check: initiate a TCP connection to the target host's IP:PORT; if the three-way handshake succeeds, the check passes, otherwise it fails.
HTTP URL check: send an HTTP GET request to a URL on the target host (e.g. http://IP:PORT/path); if the response status code is 2xx or 3xx the check passes; if it is 4xx or 5xx the check fails.
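The two probe styles above can be sketched in shell. This is a minimal sketch: the function names, the 2-second timeouts, and the use of bash's /dev/tcp and curl are illustrative choices, not what Keepalived itself does internally.

```shell
# Classify an HTTP status code as the text describes:
# 2xx/3xx -> healthy (return 0), anything else -> unhealthy (return 1).
http_status_healthy() {
  case "$1" in
    2??|3??) return 0 ;;
    *)       return 1 ;;
  esac
}

# TCP port check: healthy if the three-way handshake completes.
# Uses bash's /dev/tcp pseudo-device; the 2s timeout is an assumed value.
tcp_check() {   # usage: tcp_check IP PORT
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# HTTP URL check: fetch only the status code, then classify it.
http_check() {  # usage: http_check http://IP:PORT/path
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 "$1") || return 1
  http_status_healthy "$code"
}
```

For example, `http_check http://20.0.0.100:80/` would pass for any 2xx/3xx response from the VIP and fail on 4xx/5xx or a connection timeout.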
(3) How Keepalived works
2. Deploying Keepalived
Main modules of the Keepalived architecture and their roles:
The Keepalived architecture has three main modules: core, check, and vrrp.
core module: the heart of Keepalived; starts and maintains the main process and loads and parses the global configuration file.
vrrp module: implements the VRRP protocol (health checks and master/backup switchover between the schedulers).
check module: performs health checks, most commonly port checks and URL checks (health checks for the backend node servers).
(1) Prerequisites
Two virtual machines:
20.0.0.10 (master scheduler)
20.0.0.20 (backup scheduler)
(2) System initialization
(3) Switch to the online repositories and install Keepalived
The Keepalived version in the local repository is old; switch to the online repositories to get a newer one.
20.0.0.10
[root@zx1 ~]# cd /mnt/Packages/
[root@zx1 Packages]# ls | grep keepalived
keepalived-1.3.5-19.el7.x86_64.rpm
[root@zx1 Packages]# cd /etc/yum.repos.d/
[root@zx1 yum.repos.d]# ls
local.repo repo.bak
[root@zx1 yum.repos.d]# mv repo.bak/* ./
[root@zx1 yum.repos.d]# ls
CentOS-Base.repo CentOS-fasttrack.repo CentOS-Vault.repo repo.bak
CentOS-CR.repo CentOS-Media.repo CentOS-x86_64-kernel.repo
CentOS-Debuginfo.repo CentOS-Sources.repo local.repo
[root@zx1 yum.repos.d]# mv local.repo repo.bak
[root@zx1 yum.repos.d]# yum -y install epel-release
[root@zx1 yum.repos.d]# yum install -y keepalived
20.0.0.20
[root@zx2 ~]# cd /mnt/Packages/
[root@zx2 Packages]# ls | grep keepalived
keepalived-1.3.5-19.el7.x86_64.rpm
[root@zx2 Packages]# cd /etc/yum.repos.d/
[root@zx2 yum.repos.d]# ls
local.repo repo.bar
[root@zx2 yum.repos.d]# mv repo.bar/* ./
[root@zx2 yum.repos.d]# ls
CentOS-Base.repo CentOS-fasttrack.repo CentOS-Vault.repo repo.bar
CentOS-CR.repo CentOS-Media.repo CentOS-x86_64-kernel.repo
CentOS-Debuginfo.repo CentOS-Sources.repo local.repo
[root@zx2 yum.repos.d]# mv local.repo repo.bar
[root@zx2 yum.repos.d]# yum -y install epel-release
[root@zx2 yum.repos.d]# yum install -y keepalived
(4) Configure the master scheduler (20.0.0.10)
[root@zx1 yum.repos.d]# cd /etc/keepalived/
[root@zx1 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@zx1 keepalived]# ls
keepalived.conf keepalived.conf.bak
[root@zx1 keepalived]# vim keepalived.conf
[root@zx1 keepalived]#
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1            ##changed to localhost
   smtp_connect_timeout 30
   router_id LVS_01                 ##changed: unique router ID
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   #vrrp_garp_interval 0
   #vrrp_gna_interval 0             ##all four lines commented out
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33                 ##changed to the NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.100                  ##the VIP
    }
}                                   ##delete everything after this: press Esc to enter command mode and type 1000dd (#dd deletes # lines starting at the cursor line)
(5) Configure the backup scheduler (20.0.0.20)
[root@zx2 yum.repos.d]# cd /etc/keepalived/
[root@zx2 keepalived]# ls
keepalived.conf
[root@zx2 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@zx2 keepalived]# vim keepalived.conf
[root@zx2 keepalived]#
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1            ##changed to localhost
   smtp_connect_timeout 30
   router_id LVS_02                 ##changed: unique router ID
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   #vrrp_garp_interval 0
   #vrrp_gna_interval 0             ##all four lines commented out
}

vrrp_instance VI_1 {
    state BACKUP                    ##changed to the backup role
    interface ens33                 ##changed to the NIC name
    virtual_router_id 51
    priority 90                     ##lowered priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.100                  ##the VIP
    }
}                                   ##delete everything after this line
(6) Start Keepalived on master and backup
systemctl start keepalived
(7) Verification
Master scheduler
[root@zx1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:53:65:31 brd ff:ff:ff:ff:ff:ff
inet 20.0.0.10/24 brd 20.0.0.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 20.0.0.100/32 scope global ens33 ##success: the VIP is present
valid_lft forever preferred_lft forever
inet6 fe80::947:89f3:4c57:3a9e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:8f:c7:54 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:8f:c7:54 brd ff:ff:ff:ff:ff:ff
[root@zx1 keepalived]#
Backup scheduler
[root@zx2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:db:f6:a6 brd ff:ff:ff:ff:ff:ff
inet 20.0.0.20/24 brd 20.0.0.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::528e:8bf:1ac4:282e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:ad:f5:42 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:ad:f5:42 brd ff:ff:ff:ff:ff:ff
[root@zx2 keepalived]#
Simulate a failure of the master: the VIP disappears from the master
[root@zx1 keepalived]# systemctl stop keepalived.service
[root@zx1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:53:65:31 brd ff:ff:ff:ff:ff:ff
inet 20.0.0.10/24 brd 20.0.0.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::947:89f3:4c57:3a9e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:8f:c7:54 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:8f:c7:54 brd ff:ff:ff:ff:ff:ff
[root@zx1 keepalived]#
The VIP appears on the backup
[root@zx2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:db:f6:a6 brd ff:ff:ff:ff:ff:ff
inet 20.0.0.20/24 brd 20.0.0.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 20.0.0.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::528e:8bf:1ac4:282e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:ad:f5:42 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:ad:f5:42 brd ff:ff:ff:ff:ff:ff
[root@zx2 keepalived]#
3. Frequently Asked Questions
1. How does Keepalived decide which host is the master, and how does it configure the floating IP?
Answer:
During initialization Keepalived first checks the state setting: MASTER is the master server, BACKUP is the backup server.
It then compares the priority of all servers; the one with the highest priority becomes the final master.
The highest-priority server uses the ip command to configure the pre-defined floating IP address on itself.
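The election rule in the answer can be modeled as a toy shell function. This is a sketch: `elect_master` is a made-up name, and the protocol's tie-break by higher IP address is simplified here to a plain string comparison, which only works for equal-width dotted quads like the ones in this document.

```shell
# Pick the winner between two routers: higher priority wins;
# on a priority tie, the higher IP wins (simplified string compare).
elect_master() {   # usage: elect_master PRIO1 IP1 PRIO2 IP2 -> prints 1 or 2
  if   [ "$1" -gt "$3" ]; then echo 1
  elif [ "$3" -gt "$1" ]; then echo 2
  elif [ "$2" \> "$4" ];  then echo 1
  else                         echo 2
  fi
}

# With this document's values, priority 100 on 20.0.0.10 beats
# priority 90 on 20.0.0.20, so node 1 becomes the master.
```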
Keepalived preemptive vs. non-preemptive mode:
Preemptive mode: when the MASTER recovers from a failure, it takes the VIP back from the BACKUP node. Non-preemptive mode: the recovered MASTER does not take the VIP back from the BACKUP that was promoted to MASTER.
In non-preemptive mode both nodes must have state BACKUP and must be configured with nopreempt.
Note: with this configuration, mind the service start order: whichever node starts first becomes the master; priority no longer matters.
4. Non-Preemptive Mode Configuration
Based on the two virtual machines configured above.
(1) Modify the master and backup configurations
Master (20.0.0.10)
[root@zx1 keepalived]# vim keepalived.conf
[root@zx1 keepalived]# systemctl stop keepalived.service
[root@zx1 keepalived]#
Backup (20.0.0.20)
[root@zx2 keepalived]# vim keepalived.conf
[root@zx2 keepalived]# systemctl stop keepalived.service
[root@zx2 keepalived]#
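The edits implied by step (1) can be sketched as a keepalived.conf fragment, following the requirements stated earlier (both nodes state BACKUP, plus nopreempt). This is a sketch: apply it on both nodes, keeping each node's own router_id; the omitted lines stay as configured before.

```
vrrp_instance VI_1 {
    state BACKUP        ##both nodes must be BACKUP in non-preemptive mode
    nopreempt           ##disable preemption
    interface ens33
    virtual_router_id 51
    priority 100        ##priority no longer decides who is master
    ...
}
```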
(2) Verification
In non-preemptive mode priority is ignored; start order decides: whichever node starts first holds the VIP.
Stop the keepalived service on both nodes.
Start 20.0.0.10 first, then 20.0.0.20: the VIP appears on 20.0.0.10.
After stopping keepalived on 20.0.0.10, the VIP moves to 20.0.0.20.
Even if keepalived on 20.0.0.10 is restarted, the VIP stays on 20.0.0.20.
To move the VIP back to 20.0.0.10, restart keepalived on 20.0.0.20.
5. Deploying an LVS + Keepalived High-Availability Load-Balancing Cluster
6. Deploying an Nginx + Keepalived High-Availability Load-Balancing Cluster
7. Keepalived Split-Brain
(1) Causes of split-brain
The master continuously sends heartbeat messages (VRRP advertisements) to the backup host. If the link between them suddenly breaks, the backup stops receiving the master's heartbeats and immediately takes over the master's role, even though the master is in fact still working normally; both nodes then act as master at once, which is the split-brain condition.
(2) Remedy
Stop the Keepalived service on either the master or the backup.
(3) Prevention
1. If the system firewall is the cause, disable it or add a firewall rule allowing traffic to the VRRP multicast address (224.0.0.18).
2. If the cause is a broken communication link between master and backup, add a redundant (dual) link between them.
3. On the master, run a script that periodically checks whether the link to the backup is down; if it judges the link to be broken, the script stops the keepalived service on the master itself.
4. Use a third-party application or monitoring system to detect split-brain; once confirmed, use it to stop the keepalived service on either the master or the backup.
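Prevention method 3 can be sketched as a small script run periodically on the master. This is a sketch: the peer address comes from this document's setup, but the 3-failure threshold, ping-based probe, and function names are assumptions.

```shell
#!/bin/bash
# Stop keepalived on the master if the link to the backup looks dead.
PEER=20.0.0.20        # backup's address (from this document's setup)
FAILS_NEEDED=3        # consecutive failed pings before acting (assumed value)

# One probe: a single ping with a 1-second wait.
peer_alive() { ping -c1 -W1 "$PEER" >/dev/null 2>&1; }

# Succeeds (exit 0) only after FAILS_NEEDED consecutive failed probes,
# so a single dropped packet does not trigger a shutdown.
link_is_down() {
  local i
  for i in $(seq 1 "$FAILS_NEEDED"); do
    peer_alive && return 1
    sleep 1
  done
  return 0
}

# Intended use from cron or a loop on the master:
# link_is_down && systemctl stop keepalived
```

The point of stopping keepalived locally is that only one node then advertises VRRP, so only one node holds the VIP even while the inter-node link is broken.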
From: https://blog.csdn.net/zx52306/article/details/139830275