Keepalived
1 System Availability
- A = MTBF / (MTBF + MTTR)
- 99.95% allows (60*24*30)*(1-0.9995) = 21.6 minutes of downtime # downtime is usually measured per month
- Common targets: 99.9%, 99.99%, 99.999%, 99.9999%
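The arithmetic above generalizes to any availability target; a quick shell sketch (the 99.95% line reproduces the example above):

```shell
# Allowed downtime per 30-day month for common availability targets,
# computed exactly like the 99.95% example: minutes_per_month * (1 - A).
for A in 99.9 99.95 99.99 99.999; do
    awk -v a="$A" 'BEGIN { printf "%s%% -> %.2f minutes of downtime per month\n", a, 60*24*30*(1 - a/100) }'
done
```

99.95% works out to the 21.6 minutes quoted above; each additional nine shrinks the budget roughly tenfold.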
2 Achieving High Availability
Solution: build in redundancy
active/passive (master/backup)
active/active (dual master)
3 VRRP
Full name: Virtual Router Redundancy Protocol
The Virtual Router Redundancy Protocol removes the single point of failure of a static default gateway
Hardware implementations: routers, layer-3 switches
Software implementation: keepalived
4 Introduction to keepalived
See the official site for documentation and downloads
5 Preparing the keepalived environment
Clocks on all nodes must be synchronized: ntp or chrony
Disable the firewall and SELinux
Lab setup
Prepare five machines
```
# two keepalived nodes
k1.xier.org
k2.xier.org
# one client
client.xier.org
# two web servers
web1.xier.org
web2.xier.org
```
Installing and using keepalived
On all keepalived nodes (install via yum)
keepalived is built on the VRRP protocol: it provides a virtual IP (VIP)
```shell
yum install -y keepalived

vim /etc/hosts
10.0.0.100 www.xier.org    # 10.0.0.100 will serve as the VIP

cp /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf
# Section 1: global configuration
# Section 2: VRRP configuration
# Section 3: keepalived + LVS integration

# Section 1
global_defs {
    notification_email {
        [email protected]           # recipients of failover notification mail; one address per line
        [email protected]
    }
    notification_email_from [email protected]  # sender address
    smtp_server 127.0.0.1        # mail server address
    smtp_connect_timeout 30      # mail server connection timeout
    router_id ka1.xier.org       # unique ID of each keepalived host; the hostname is recommended, though duplicates across nodes do not break anything
    vrrp_skip_check_adv_addr     # by default every advertisement is fully checked, which costs performance; with this option, the check is skipped when an advertisement comes from the same router as the previous one
    vrrp_strict                  # strict VRRP compliance; with it enabled the service refuses to start when: 1. no VIP is set, 2. unicast peers are configured, 3. IPv6 addresses are used with VRRP version 2. If enabled without vrrp_iptables it adds iptables rules that by default make the VIP unreachable; not recommended (avoid in production)
    vrrp_garp_interval 0         # delay between gratuitous ARP messages; 0 means no delay
    vrrp_gna_interval 0          # delay between unsolicited NA messages
    vrrp_mcast_group4 224.0.0.18 # multicast group, range 224.0.0.0-239.255.255.255, default 224.0.0.18; give each keepalived cluster its own group to avoid interference
    vrrp_iptables                # when set together with vrrp_strict, no firewall rules are added (check with iptables -vnL); unnecessary if vrrp_strict is not set
}

# Section 2
vrrp_instance web {
    state MASTER|BACKUP          # initial state of this node in the virtual router: MASTER or BACKUP
    interface eth0               # physical interface bound to this virtual router, e.g. eth0, bond0, br0; may differ from the VIP's interface
    virtual_router_id 51         # virtual router ID, range 0-255; must be unique per virtual router on the same network or the service will not start, and must be identical on all keepalived nodes of the same virtual router
    priority 100                 # priority of this node in the virtual router, range 1-254; must differ between keepalived nodes
    advert_int 1                 # VRRP advertisement interval in seconds; priority is checked once per interval, default 1 s
    authentication {             # authentication
        auth_type PASS           # AH is IPSEC authentication (not recommended); PASS is a simple password (recommended)
        auth_pass 1111           # pre-shared key; only the first 8 characters are used; must match on all nodes of the same virtual router
    }
    virtual_ipaddress {          # VIPs; production setups may list hundreds of addresses
        192.168.200.16                           # VIP without an interface defaults to eth0; note: without a /prefix it defaults to /32
        192.168.200.17/24 dev eth1               # VIP bound to a specific interface, ideally not the one named by the interface directive
        192.168.200.18/24 dev eth2 label eth2:1  # VIP with an interface label
    }
}

# Section 3
# virtual_server 192.168.200.100 443 — omitted here
```
Note: CentOS 7 has a bug that may cause the following:
```shell
systemctl restart keepalived   # a new configuration may not take effect
systemctl stop keepalived; systemctl start keepalived
```
Watching VIP advertisements from the client
```shell
tcpdump -i eth0 -nn host 224.0.0.18
```
Compiling and installing keepalived
```shell
# ===== all keepalived nodes =====
yum install gcc curl openssl-devel libnl3-devel net-snmp-devel
wget https://keepalived.org/software/keepalived-2.2.2.tar.gz
tar xvf keepalived-2.2.2.tar.gz -C /usr/local/src
cd /usr/local/src/keepalived-2.2.2
# The --disable-fwmark option disables iptables rules, which prevents the VIP from
# becoming unreachable; without it, iptables rules are enabled by default
./configure --prefix=/apps/keepalived
make && make install
/apps/keepalived/sbin/keepalived -v
mkdir /etc/keepalived
cp /apps/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf

# ===== ka1 =====
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka1.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance web {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    authentication {        # authentication
        auth_type PASS      # AH is IPSEC authentication (not recommended); PASS is a simple password (recommended)
        auth_pass 1111      # pre-shared key; only the first 8 characters are used; must match on all nodes of the same virtual router
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}
systemctl start keepalived
scp -r /etc/keepalived 10.0.0.18:/etc/

# ===== ka2 =====
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka2.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance web {
    state BACKUP
    interface eth0
    virtual_router_id 66
    priority 70
    advert_int 1
    authentication {        # authentication
        auth_type PASS      # AH is IPSEC authentication (not recommended); PASS is a simple password (recommended)
        auth_pass 1111      # pre-shared key; only the first 8 characters are used; must match on all nodes of the same virtual router
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}
```
Preempt and non-preempt modes
- Preempt mode is the default: when the higher-priority host comes back online it takes the MASTER role back from the lower-priority host, causing network disturbance. This can be avoided with non-preempt mode (nopreempt), where the recovered higher-priority host does not take MASTER back.
- However, if the original master fails, the VIP moves to the new host, and that host later fails as well, the VIP migrates back to the original host anyway; for this reason non-preempt mode is often not recommended in production.
Non-preempt mode
Note: to disable VIP preemption, every keepalived server's state must be set to BACKUP
The effect: after the master fails and the VIP floats away, restarting the original master will not take the VIP back
```shell
# k1 configuration
vrrp_instance web {
    state BACKUP              # BACKUP on both nodes
    interface eth0
    virtual_router_id 51
    priority 100              # higher priority
    advert_int 1
    nopreempt                 # add this line on both nodes
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}

# k2 configuration
vrrp_instance web {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    nopreempt                 # add this line on both nodes
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}
```
Preempt-delay mode
Requires strict VRRP mode to be off: do not enable vrrp_strict
In preempt-delay mode the higher-priority host, after recovering, does not reclaim the VIP immediately but waits for a delay (default 300 s) first
```
preempt_delay <SECONDS>   # preemption delay in seconds, default 300
```
Note: every keepalived server's state must be BACKUP
Example
```shell
# k1 configuration
vrrp_instance web {
    state BACKUP              # BACKUP on both nodes
    interface eth0
    virtual_router_id 51
    priority 100              # higher priority
    advert_int 1
    preempt_delay 60          # preempt-delay mode; the default delay is 300 s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}

# k2 configuration
vrrp_instance web {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    preempt_delay 60          # preempt-delay mode; the default delay is 300 s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}
```
VIP unicast configuration
By default keepalived hosts advertise to each other via multicast, which can congest the network; switching to unicast reduces network traffic
Benefit: less multicast traffic spreading across switches
Note: with unicast enabled, vrrp_strict must not be set, and any vrrp_mcast_group4 setting is ignored
```
# In the vrrp_instance block on every node, list the peer hosts' IPs; addresses on a
# dedicated heartbeat network are preferable to the business network
unicast_src_ip <IPADDR>    # source IP for sending unicast advertisements
unicast_peer {
    <IPADDR>               # IP of the peer host that receives the unicast advertisements
    ......
}
```
Example (note: this goes in the second section, inside vrrp_instance)
```shell
# k1 configuration
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka1.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance web {
    state BACKUP
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    unicast_src_ip 10.0.0.8      # source IP for unicast advertisements
    unicast_peer {
        10.0.0.18                # peer host that receives the unicast advertisements
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}

# k2 configuration
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka2.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance web {
    state BACKUP
    interface eth0
    virtual_router_id 66
    priority 80
    advert_int 1
    unicast_src_ip 10.0.0.18     # source IP for unicast advertisements
    unicast_peer {
        10.0.0.8                 # peer host that receives the unicast advertisements
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:1
    }
}
```
Verification
```shell
tcpdump -i eth0 -nn host 10.0.0.8 and 10.0.0.18
```
Enabling keepalived logging
Edit the startup options file (syslog facilities local0–local7 are available)
```shell
cp /apps/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 6"

# edit the rsyslog configuration
vim /etc/rsyslog.conf
local6.*    /var/log/keepalived.log

systemctl restart keepalived rsyslog
tail -f /var/log/keepalived.log
```
Dual-master keepalived configuration
Configuration files
k1 configuration
```shell
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka1.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100...150/24 dev eth0 label eth0:0..150
    }
}
vrrp_instance web2 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 80
    advert_int 1
    virtual_ipaddress {
        10.0.0.151...200/24 dev eth0 label eth0:151...200
    }
}
```
k2 configuration
```shell
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka2.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance web1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    priority 80
    advert_int 1
    virtual_ipaddress {
        10.0.0.100...150/24 dev eth0 label eth0:0..150
    }
}
vrrp_instance web2 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.151...200/24 dev eth0 label eth0:151...200
    }
}
```
Restart the service and check the addresses (all nodes)
```shell
systemctl restart keepalived
hostname -I | tr ' ' '\n'
```
Using sub-configuration files
Configuration as follows
k1
```shell
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka1.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.1.1.1
}
include /etc/keepalived/conf.d/*.conf

mkdir /etc/keepalived/conf.d/
vim /etc/keepalived/conf.d/web1.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100...150/24 dev eth0 label eth0:0..150
    }
}
vim /etc/keepalived/conf.d/web2.conf
vrrp_instance web2 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 80
    advert_int 1
    virtual_ipaddress {
        10.0.0.151...200/24 dev eth0 label eth0:151...200
    }
}
```
k2
```shell
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka2.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.1.1.1
}
include /etc/keepalived/conf.d/*.conf

mkdir /etc/keepalived/conf.d/
vim /etc/keepalived/conf.d/web1.conf
vrrp_instance web1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    priority 80
    advert_int 1
    virtual_ipaddress {
        10.0.0.100...150/24 dev eth0 label eth0:0..150
    }
}
vim /etc/keepalived/conf.d/web2.conf
vrrp_instance web2 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.151...200/24 dev eth0 label eth0:151...200
    }
}
```
Restart the service and verify the result
```shell
systemctl restart keepalived
hostname -I | tr ' ' '\n'
```
keepalived notification scripts
When keepalived's state changes, it can trigger a script automatically, e.g. to e-mail an alert to users
Scripts run as the user keepalived_script by default; if that user does not exist, they run as root
The script user can be set with the following directive:
```
global_defs {
    ......
    script_user <user>
    ......
}
```
Notification script types
- notify_master <STRING>|<QUOTED-STRING>: triggered when this node becomes the master
- notify_backup <STRING>|<QUOTED-STRING>: triggered when this node becomes a backup
- notify_fault <STRING>|<QUOTED-STRING>: triggered when this node enters the FAULT state (keepalived failure)
- notify <STRING>|<QUOTED-STRING>: generic form; a single script can handle all three state transitions above
- notify_stop <STRING>|<QUOTED-STRING>: triggered when VRRP stops
Calling the scripts (mail alerting)
Add the following lines at the end of the vrrp_instance VI_1 block:
```
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
```
Example
On both k1 and k2
```shell
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id ka1.xier.org
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.1.1.1
}
include /etc/keepalived/conf.d/*.conf

mkdir /etc/keepalived/conf.d/
vim /etc/keepalived/conf.d/web1.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:100
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

vim /etc/keepalived/notify.sh
#!/bin/bash
contact='[email protected]'
notify() {
    mailsubject="$(hostname) to be $1, vip floating"                             # mail subject
    mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"   # mail body
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

chmod +x /etc/keepalived/notify.sh

# configure the mailbox
dnf install -y mailx
# Sender: in the QQ mail web UI, Settings (top left) -> Account -> enable the SMTP service.
# SMTP server: smtp.qq.com, port 465, SMTP HELO: qq.com, SMTP mail: [email protected],
# secure connection: SSL/TLS, check "verify peer certificate" and "verify host",
# authentication: user name [email protected], authorization code: ljroytmuhlkjbgje
vim /etc/mail.rc
......
set [email protected]
set smtp=smtp.qq.com
set [email protected]
set smtp-auth-password=ljroytmuhlkjbgje

systemctl restart keepalived
```
Test that mail can be sent
```shell
/etc/keepalived/notify.sh master
```
High availability with keepalived + LVS
Install the ipvs tooling on k1 and k2
```shell
yum install -y ipvsadm
```
IPVS-related configuration
Virtual server configuration structure
```
virtual_server IP port {
    ...
    real_server {
        ...
    }
    real_server {
        ...
    }
    ...
}
```
Ways to define a virtual server
```
virtual_server IP port        # virtual server defined by VIP and port
virtual_server fwmark int     # virtual server defined by an ipvs firewall mark, for firewall-mark-based load-balancing clusters
virtual_server group string   # virtual server defined via a virtual server group
```
Virtual server groups
Multiple virtual servers can be defined as one group and served as a unit, e.g. grouping http and https into a single virtual server group
Reference documentation:
```
/apps/keepalived/etc/keepalived/samples/
```
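A minimal sketch of grouping the http and https ports of the VIP used in this document into one group; the group name `web` and all values here are illustrative assumptions, so consult the samples directory above for the authoritative syntax:

```
virtual_server_group web {       # hypothetical group name
    10.0.0.100 80                # http
    10.0.0.100 443               # https
}

virtual_server group web {       # one definition serves every address/port in the group
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            connect_port 80
        }
    }
}
```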
Virtual server configuration
RS = real (backend) server, VS = the LVS server
```
virtual_server IP port {                  # VIP and PORT
    delay_loop <INT>                      # interval between health checks of the real servers
    lb_algo rr|wrr|lc|wlc|lblc|sh|dh      # scheduling algorithm
    lb_kind NAT|DR|TUN                    # cluster type; must be upper case
    persistence_timeout <INT>             # persistent-connection duration
    protocol TCP|UDP|SCTP                 # service protocol, usually TCP
    sorry_server <IPADDR> <PORT>          # fallback server used when all RS are down
    real_server <IPADDR> <PORT> {         # RS IP and PORT
        weight <INT>                      # RS weight
        notify_up <STRING>|<QUOTED-STRING>   # script run when this RS comes up
        HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }  # health-check method for this RS
    }
}
# Note: braces must be placed on separate lines; putting two on one line, e.g. }}, causes an error
```
Application-layer checks
Application-layer checks: HTTP_GET|SSL_GET
```
HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>            # URL to monitor
        status_code <INT>          # response code considered healthy, usually 200
    }
    connect_timeout <INTEGER>      # request timeout, comparable to haproxy's timeout server
    nb_get_retry <INT>             # number of retries
    delay_before_retry <INT>       # delay before each retry
    connect_ip <IP ADDRESS>        # RS IP to send the health-check request to
    connect_port <PORT>            # RS port to send the health-check request to
    bindto <IP ADDRESS>            # source address used for the health-check request
    bind_port <PORT>               # source port used for the health-check request
}
```
Example
```shell
vim /etc/keepalived/conf.d/lvs_web.conf
virtual_server 10.0.0.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.0.0.7 80 {
        weight 1
        HTTP_GET {
            url {
                path /monitor.html
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.0.0.17 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
```
Transport-layer (TCP) checks
Transport-layer check: TCP_CHECK
```
TCP_CHECK {
    connect_ip <IP ADDRESS>        # RS IP to send the health-check request to
    connect_port <PORT>            # RS port to send the health-check request to
    bindto <IP ADDRESS>            # source address used for the health-check request
    bind_port <PORT>               # source port used for the health-check request
    connect_timeout <INTEGER>      # request timeout, comparable to haproxy's timeout server
}
```
Example
```
virtual_server 10.0.0.10 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    # persistence_timeout 120      # session persistence time
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.0.0.7 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.0.17 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
```
Practical cases
Single-master LVS-DR mode
Prepare the script on the LVS real servers
RS are the LVS backends; VS is the LVS server itself
```shell
vim /root/lvs-dr-rs.sh
#!/bin/bash
# LVS DR real-server initialization script
LVS_VIP=10.0.0.100
DEV=lo:0
# source /etc/rc.d/init.d/functions
case "$1" in
start)
    /sbin/ifconfig $DEV $LVS_VIP netmask 255.255.255.255 # broadcast $LVS_VIP
    # /sbin/route add -host $LVS_VIP dev lo:0
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer RS Start OK"
    ;;
stop)
    /sbin/ifconfig $DEV down
    # /sbin/route del $LVS_VIP > /dev/null 2>&1
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer RS Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0

# run the script
bash lvs-dr-rs.sh start
```
Configure the LVS part of keepalived on k1 and k2
```shell
cd /etc/keepalived/conf.d/
vim lvs_web1.conf
virtual_server 10.0.0.100 80 {
    delay_loop 3
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.0.0.7 80 {
        weight 1
        HTTP_GET {                 # application-layer check
            url {
                path /monitor.html
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.17 80 {
        weight 1
        TCP_CHECK {                # TCP check for the other backend
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

# on k1
scp lvs_web1.conf 10.0.0.18:/etc/keepalived/conf.d/
systemctl restart keepalived
```
web1
```shell
yum install -y httpd
echo 10.0.0.7-monitor.html > /var/www/html/monitor.html
echo 10.0.0.7 > /var/www/html/index.html
systemctl enable --now httpd
```
web2
```shell
yum install -y httpd
echo 10.0.0.17-monitor.html > /var/www/html/monitor.html
echo 10.0.0.17 > /var/www/html/index.html
systemctl enable --now httpd
```
Access test on k1/k2
```shell
# verify that the LVS rules were generated
ipvsadm -Ln
curl 10.0.0.100
```
Testing keepalived's health checks
```shell
# stop httpd on web1 and confirm traffic is no longer scheduled to it
# web1
systemctl stop httpd
# client
curl 10.0.0.100
# the failed node no longer receives requests
```
Result: both LVS high availability and health checking work
Note: the address and port LVS listens on live in the kernel, so they take precedence over a user-space application such as httpd
Dual-master LVS-DR mode
Plan
10.0.0.100 is the VIP for Apache
10.0.0.200 is the VIP for MySQL
On web1 and web2
```shell
yum install -y mariadb-server
systemctl enable --now mariadb
mysql -e "grant all on *.* to test@'10.0.0.%' identified by '123456'"
```
client
```shell
yum install -y mariadb
mysql -utest -h10.0.0.7 -p123456 -e "show variables like '%hostname%'"
mysql -utest -h10.0.0.17 -p123456 -e "show variables like '%hostname%'"
```
ka1
```shell
cd /etc/keepalived/conf.d/
vim mysql_vip.conf
vrrp_instance mysql {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.200/24 dev eth0 label eth0:200
    }
}
```
ka2
```shell
cd /etc/keepalived/conf.d/
vim mysql_vip.conf
vrrp_instance mysql {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.200/24 dev eth0 label eth0:200
    }
}
```
Configure the keepalived LVS rules on ka1 and ka2 (the lvs_mysql.conf file is identical on both nodes)
```shell
cd /etc/keepalived/conf.d/
vim lvs_mysql.conf
virtual_server 10.0.0.200 3306 {
    delay_loop 3
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 3306
    real_server 10.0.0.7 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 10.0.0.17 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
```
On mysql1 and mysql2 (i.e. web1 and web2)
```shell
ifconfig lo:2 10.0.0.200/32
```
Binding multiple services with a firewall mark
Run the following on both nodes
```shell
# the table must be mangle; this cannot be changed
iptables -t mangle -A PREROUTING -d 10.0.0.100 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 6
iptables -t mangle -vnL
```
Configure the keepalived files (k1 and k2)
```shell
vim /etc/keepalived/conf.d/web1_vip.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:100
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

vim /etc/keepalived/conf.d/web1.conf
virtual_server fwmark 6 {          # use firewall mark 6
    delay_loop 3
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80      # the port must be specified
    real_server 10.0.0.7 80 {
        weight 1
        HTTP_GET {
            url {
                path /monitor.html
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.17 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
```
Access test
```shell
curl -k https://10.0.0.100/
```
High availability for other applications: VRRP Script
Implemented in two steps
vrrp_script: defines a resource-monitoring script whose return value drives VRRP behavior; it is a shared definition callable by multiple instances, placed as a standalone block outside any vrrp instance, usually right after global_defs
The script typically monitors the state of a given application; once the application is found abnormal, the MASTER's priority is reduced below the BACKUP's, so the VIP switches to the BACKUP node
```
vrrp_script <SCRIPT_NAME> {
    script <STRING>|<QUOTED-STRING>   # when this script returns non-zero, the OPTIONS below take effect
    OPTIONS
}
```
Calling the script
track_script: calls a script defined by vrrp_script to monitor a resource; it is placed inside a VRRP instance and references the previously defined vrrp_script
```
track_script {
    SCRIPT_NAME_1
    SCRIPT_NAME_2
}
```
Defining a vrrp script
```
vrrp_script <SCRIPT_NAME> {               # defines a check script; configured outside global_defs
    script <STRING>|<QUOTED-STRING>       # shell command or script path
    interval <INTEGER>                    # check interval in seconds, default 1
    timeout <INTEGER>                     # timeout
    weight <INTEGER:-254..254>            # default 0; a negative value is added to this node's priority while the script returns non-zero (fall), lowering it; a positive value is added while the script returns 0 (rise), raising it; negative values are the usual choice
    fall <INTEGER>                        # consecutive failures before the check is considered failed; 2 or more is recommended
    rise <INTEGER>                        # consecutive successes before a failed check is considered recovered, restoring the priority
    user USERNAME [GROUPNAME]             # user/group that runs the check script
    init_fail                             # start in the failed state; switch to success only after a check passes
}
```
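A toy illustration of the weight arithmetic, using the priorities from the examples in this document (MASTER 100, BACKUP 80) and a weight of -30; the numbers are assumptions for illustration, not keepalived output:

```shell
MASTER_PRIO=100
BACKUP_PRIO=80
WEIGHT=-30                                  # negative weight, applied while the check script fails

EFFECTIVE=$((MASTER_PRIO + WEIGHT))         # 100 + (-30) = 70
echo "effective master priority: $EFFECTIVE"
if [ "$EFFECTIVE" -lt "$BACKUP_PRIO" ]; then
    echo "the VIP fails over to the backup node"
fi
```

Once the script passes `rise` consecutive times, the weight is no longer applied, the master's priority returns to 100, and with preemption enabled the VIP moves back.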
Calling the VRRP script
```
vrrp_instance VI_1 {
    ...
    track_script {
        chk_down
    }
}
```
Example
First restore the single-master, single-backup environment
k1
```shell
vim /etc/keepalived/conf.d/web_vip.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100               # set to 80 on the backup
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:100
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
```
```shell
vim /etc/keepalived/conf.d/check_down.conf
vrrp_script check_down {
    script "[ ! -f /etc/keepalived/down ]"
    interval 1
    weight -30
    fall 3
    rise 2
    timeout 2
}
```
ka1: call the script
```shell
vim /etc/keepalived/conf.d/web_vip.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100               # set to 80 on the backup
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:100
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
    track_script {
        check_down             # the name defined in the script configuration file above
    }
}

systemctl restart keepalived
```
- Packet-capture test from the client
```shell
tcpdump -i eth0 host 224.1.1.1

# on ka1, create the file
touch /etc/keepalived/down
# check whether the priority dropped by 30; if so, the script works
# then delete the file on ka1 and check whether the VIP comes back
rm -f /etc/keepalived/down
```
Highly available nginx reverse proxy
From: https://www.cnblogs.com/wsxier/p/17278348.html
Using a check-script file
Prepare the environment on k1 and k2
```shell
dnf install -y nginx
vim /etc/nginx/nginx.conf
......
http {
    upstream webservers {
        server 10.0.0.7:80;
        server 10.0.0.17:80;
    }
    server {
        ......
        location / {
            proxy_pass http://webservers/;
        }
    }
}

# drop the LVS configuration and keep only the VIP, since nginx now does the reverse proxying
cat /etc/keepalived/conf.d/web1_vip.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    unicast_src_ip 10.0.0.8
    unicast_peer {
        10.0.0.18
    }
    virtual_ipaddress {
        10.0.0.100 dev eth0 label eth0:1
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

systemctl restart nginx keepalived
```
Test from the client
```shell
curl 10.0.0.100
```
keepalived now fails over automatically, but if nginx itself dies the reverse proxy stops working while the VIP stays put; a check script solves this
Idea: detect whether the nginx process exists
```shell
pgrep nginx

# send signal 0
killall -0 nginx
echo $?     # 0 means nginx is running; non-zero means it is not
```
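 
The `-0` trick above relies on the fact that signal 0 is never delivered; the kernel only reports whether the target process exists. A self-contained sketch using plain `kill` (the PIDs here are illustrative):

```shell
# check_pid reports whether a PID exists, using the same "signal 0" probe
# that the killall -0 check above applies to nginx.
check_pid() {
    if kill -0 "$1" 2>/dev/null; then
        echo alive
    else
        echo dead
    fi
}

check_pid $$          # the current shell: prints "alive"
check_pid 99999999    # a PID far above the usual pid_max: prints "dead"
```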
Write the keepalived check script on k1 and k2
```shell
cd /etc/keepalived/
vim conf.d/check_nginx.conf
vrrp_script check_nginx {
    script "/etc/keepalived/conf.d/check_nginx.sh"
    interval 1
    weight -30
    fall 3
    rise 2
    timeout 2
}

vim /etc/keepalived/conf.d/check_nginx.sh
#!/bin/bash
/usr/bin/killall -0 nginx &> /dev/null

chmod +x /etc/keepalived/conf.d/check_nginx.sh

vim /etc/keepalived/conf.d/web_vip.conf
vrrp_instance web1 {
    state MASTER
    interface eth0
    virtual_router_id 66
    priority 100               # set to 80 on the backup
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:100
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
    track_script {
        check_nginx
    }
}
```
Test
```shell
# watch the advertisements
tcpdump -i eth0 host 224.1.1.1
# stop nginx on k1 and watch whether the priority and the VIP change
killall nginx
# check from the client that the service still responds
curl 10.0.0.100
# start nginx on ka1 again and watch whether the priority and the VIP recover
systemctl start nginx
```
A self-healing version of the script
```shell
vim /etc/keepalived/conf.d/check_nginx.sh
#!/bin/bash
/usr/bin/killall -0 nginx &> /dev/null || systemctl restart nginx
systemctl is-active nginx
```