
Monitoring an nginx HA setup for split-brain with Zabbix

Date: 2024-03-13 19:29:06


Lab environment

Disable the firewall and SELinux on all machines.

Machine 1, zabbix (192.168.159.141): LAMP, zabbix_server, zabbix_agentd

Machine 2, lb1 (192.168.159.139): keepalived, nginx master load-balancing the test pages on rs1 and rs2

Machine 3, lb2 (192.168.159.147): keepalived, nginx slave load-balancing the test pages on rs1 and rs2, zabbix_agentd

Machine 4, rs1 (192.168.159.148): nginx test page

Machine 5, rs2 (192.168.159.149): nginx test page

The virtual IP (VIP) for this HA setup is tentatively set to 192.168.159.250.
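As noted above, the firewall and SELinux must be off on every machine; on a typical RHEL-family host the commands look like this (a standard sketch, not taken from the article, guarded so it is harmless on hosts without these components):

```shell
# Disable firewalld and SELinux (run on every machine as root)
systemctl disable --now firewalld 2>/dev/null || true
setenforce 0 2>/dev/null || true
# Make the SELinux change persistent across reboots
if [ -f /etc/selinux/config ]; then
    sed -ri 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config || true
fi
```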

Installing keepalived
Configure the master keepalived
[root@lb1 ~]# yum -y install keepalived
Write the configuration file
[root@lb1 ~]# cd /etc/keepalived/ 
[root@lb1 keepalived]# ls
keepalived.conf
[root@lb1 keepalived]# mv keepalived.conf{,.bak}
[root@lb1 keepalived]# ls
keepalived.conf.bak
[root@lb1 keepalived]# vim keepalived.conf
[root@lb1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 71
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.159.250
    }
}

virtual_server 192.168.159.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.159.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.159.147 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Start the service and note that the VIP is now present
[root@lb1 ~]# systemctl enable --now keepalived
[root@lb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b6:d6:ff brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.159.139/24 brd 192.168.159.255 scope global dynamic noprefixroute ens33
       valid_lft 1348sec preferred_lft 1348sec
    inet 192.168.159.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb6:d6ff/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Install keepalived on the backup server the same way

[root@lb2 ~]# yum -y install keepalived
[root@lb2 ~]# cd /etc/keepalived/ 
[root@lb2 keepalived]# ls
keepalived.conf
[root@lb2 keepalived]# mv keepalived.conf{,.bak}
[root@lb2 keepalived]# vim keepalived.conf
[root@lb2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 71
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.159.250
    }
}

virtual_server 192.168.159.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.159.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.159.147 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@lb2 keepalived]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@lb2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f6:3c:cb brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.159.147/24 brd 192.168.159.255 scope global dynamic noprefixroute ens160
       valid_lft 1141sec preferred_lft 1141sec
    inet6 fe80::20c:29ff:fef6:3ccb/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Try it in a browser to confirm the nginx load-balancing service is reachable
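The browser checks are shown only as screenshots; a quick curl loop from any host on the segment gives the same confirmation. Note that persistence_timeout 50 in the config pins a given client to one real server for 50 seconds, so a single client may keep seeing the same page:

```shell
# Hit the VIP a few times; a short timeout keeps the loop snappy even
# when the VIP is unreachable from the machine running the check
attempts=0
for i in 1 2 3 4; do
    curl -s --max-time 2 http://192.168.159.250/ || true
    attempts=$((attempts + 1))
done
echo "made $attempts requests"
```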

Stop the nginx service on the backup node, so that what you reach through the VIP is served via the master node:

[root@lb2 ~]# systemctl stop nginx


Have keepalived monitor the nginx load balancers

keepalived monitors the state of the nginx load balancers through scripts.

Write the scripts on lb1

[root@lb1 ~]# mkdir /scripts
[root@lb1 ~]# cd /scripts/
[root@lb1 scripts]# vim check_nginx.sh
[root@lb1 scripts]# chmod +x check_nginx.sh 
[root@lb1 scripts]# vim notify.sh
[root@lb1 scripts]# cat notify.sh 
#!/bin/bash

case "$1" in
  master)
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -lt 1 ]; then
        systemctl start nginx
    fi
  ;;
  backup)
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -gt 0 ]; then
        systemctl stop nginx
    fi
  ;;
  *)
    echo "Usage: $0 master|backup"
  ;;
esac
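check_nginx.sh is created and made executable above, but its contents are never shown. A hedged reconstruction, inferred from the failover behaviour described later (when nginx dies on the master, keepalived there stops too), written to /tmp here for illustration while the article keeps it at /scripts/check_nginx.sh:

```shell
# Hypothetical sketch of check_nginx.sh; the article does not show the real file
cat > /tmp/check_nginx.sh <<'EOF'
#!/bin/bash
# Exit non-zero when nginx is down so the vrrp_script penalty applies;
# stopping keepalived matches the failover behaviour the article describes
nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
if [ "$nginx_status" -lt 1 ]; then
    systemctl stop keepalived
    exit 1
fi
exit 0
EOF
chmod +x /tmp/check_nginx.sh
```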

Copy this script to the backup node. The master node does not use it; it is kept there only as a backup copy.
[root@lb1 scripts]# scp notify.sh 192.168.159.147:/scripts/
The authenticity of host '192.168.159.147 (192.168.159.147)' can't be established.
ED25519 key fingerprint is SHA256:bkL+H8KDU3f4oa6FUb2+zdbsK+6fCEjjgbuaDzWjdoE.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.159.147' (ED25519) to the list of known hosts.
[email protected]'s password: 
notify.sh                                                100%  372   315.7KB/s   00:00   

On lb2

[root@lb2 ~]# cd /scripts/
[root@lb2 scripts]# ls
check_process.sh  log.py  notify.sh
[root@lb2 scripts]# chmod +x notify.sh 
[root@lb2 scripts]# ls
check_process.sh  log.py  notify.sh

Add the monitoring script to the keepalived configuration

Configure the master keepalived. When check_nginx.sh exits non-zero, the weight -30 in vrrp_script lowers the master's effective priority from 100 to 70, below the backup's 80, which triggers a failover.

[root@lb1 ~]# vim  /etc/keepalived/keepalived.conf
[root@lb1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_script nginx_check {
    script "/scripts/check_nginx.sh"
    interval 1
    weight -30
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 71
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.159.250
    }
    track_script {
        nginx_check
    }
}

virtual_server 192.168.159.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.159.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.159.147 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
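Before restarting, the edited file can be syntax-checked; recent keepalived releases (2.x) provide a -t / --config-test flag (availability depends on the installed version, so treat this as a hedged extra step):

```shell
# Syntax-check the config where keepalived is installed; skip silently otherwise
if command -v keepalived >/dev/null 2>&1; then
    keepalived -t -f /etc/keepalived/keepalived.conf || true
fi
```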

[root@lb1 ~]# systemctl restart keepalived

Configure the backup keepalived

[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
[root@lb2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 71
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.159.250
    }
    notify_master "/scripts/notify.sh master"
    notify_backup "/scripts/notify.sh backup"
}

virtual_server 192.168.159.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.159.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.159.147 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

[root@lb2 ~]# systemctl restart keepalived

The resulting behaviour

Stop the nginx service on lb1: keepalived on lb1 stops as well, the VIP moves to lb2, and the nginx and keepalived services on lb2 start.

[root@lb1 ~]# systemctl stop nginx
[root@lb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b6:d6:ff brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.159.139/24 brd 192.168.159.255 scope global dynamic noprefixroute ens33
       valid_lft 962sec preferred_lft 962sec
    inet6 fe80::20c:29ff:feb6:d6ff/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

[root@lb2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f6:3c:cb brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.159.147/24 brd 192.168.159.255 scope global dynamic noprefixroute ens160
       valid_lft 944sec preferred_lft 944sec
    inet 192.168.159.250/32 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:3ccb/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@lb2 ~]# ss -antl
State     Recv-Q    Send-Q       Local Address:Port        Peer Address:Port    Process    
LISTEN    0         128                0.0.0.0:22               0.0.0.0:*                  
LISTEN    0         511                0.0.0.0:80               0.0.0.0:*                  
LISTEN    0         4096               0.0.0.0:10050            0.0.0.0:*                  
LISTEN    0         128                   [::]:22                  [::]:*                  
LISTEN    0         511                   [::]:80                  [::]:*                 

Monitoring keepalived

Monitoring of the keepalived service should be done on the backup server, by adding a custom Zabbix monitor.

What is monitored is whether the backup holds the VIP address (192.168.159.250).

So the monitoring script is written on lb2.

[root@lb2 ~]# cd /scripts/
[root@lb2 scripts]# ls
check_process.sh  log.py  notify.sh
[root@lb2 scripts]# vim check_keepalived.sh
[root@lb2 scripts]# chmod +x check_keepalived.sh 
[root@lb2 scripts]# cat check_keepalived.sh 
#!/bin/bash

if [ "$(ip a show ens160 | grep 192.168.159.250 | wc -l)" -ne 0 ]
then
    echo "1"
else
    echo "0"
fi

[root@lb2 scripts]# ll check_keepalived.sh 
-rwxr-xr-x 1 root root 125 Mar  4 17:07 check_keepalived.sh
[root@lb2 scripts]# ./check_keepalived.sh
1
An output of 1 means the backup holds the VIP; 0 means it does not.
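One caveat: the plain grep above would also match longer strings such as 192.168.159.2501. A stricter variant (an editorial suggestion, not from the article):

```shell
# -w requires word boundaries and the dots are escaped, so only the exact
# address matches; errors are silenced on hosts without an ens160 interface
ip a show ens160 2>/dev/null | grep -wc '192\.168\.159\.250' || true
```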

Edit the agent configuration file to create the custom monitoring item
[root@lb2 scripts]# vim /usr/local/etc/zabbix_agentd.conf
Add the custom monitoring key:
UserParameter=check_keepalived,/bin/bash /scripts/check_keepalived.sh
Since the configuration file was modified, restart the service so it rereads the configuration
[root@lb2 scripts]# systemctl restart zabbix_agentd.service
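Before querying from the server, the key can also be evaluated locally: zabbix_agentd's -t flag tests a single item key against the loaded configuration. Guarded here so the snippet is harmless on machines without the agent installed:

```shell
# Evaluate the custom key locally with the agent's own config
if command -v zabbix_agentd >/dev/null 2>&1; then
    zabbix_agentd -t check_keepalived
fi
```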

After creating the custom key, test from the Zabbix server that it can receive the value from the monitored host

[root@zabbix ~]# zabbix_get -s 192.168.159.147 -k check_keepalived
1
The value is received successfully; since the backup currently holds the VIP, it shows 1.

Add the monitor in the Zabbix web UI

Create the item (screenshot omitted).

Item created successfully.

Create the trigger (screenshot omitted).

Because nginx on my master server is stopped at this point, the VIP sits on the backup and the retrieved value is 1, so the service is flagged as abnormal.
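The trigger itself appears only in a screenshot. As a hedged example, on Zabbix 6.x a trigger expression for this item could look like the following (the host name `lb2` is an assumption):

```
last(/lb2/check_keepalived)=1
```

That is, raise a problem whenever the backup reports that it holds the VIP.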


From: https://blog.csdn.net/weixin_65309423/article/details/136669044
