1. Introduction
Some time ago I took on a new Internet hospital project. Resources were tight at the time, so the underlying business storage had to run on a single NFS server. To avoid data loss from that single point of failure, the environment needed to be reworked into a dual-node hot-standby, highly available NFS setup.
2. System Environment
| Node role | OS | IP | Components |
| --- | --- | --- | --- |
| Master | CentOS 7 | 10.10.203.180 | Rsync + Inotify / NFS / Keepalived |
| Slave | CentOS 7 | 10.10.203.167 | Rsync + Inotify / NFS / Keepalived |
3. NFS High Availability Deployment
1) Install the NFS service (on both Master and Slave)
# yum install -y nfs-utils
2) Create the NFS directory
# mkdir -p /data/nfs
3) Edit the exports file
# vim /etc/exports
/data/nfs 10.10.203.0/24(fsid=0,rw,sync,no_root_squash,no_all_squash)
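For reference, the export options on that line do the following (my annotations, not part of the original config):

```shell
# /etc/exports options, annotated:
#   fsid=0          marks this directory as the NFSv4 pseudo-root
#   rw              clients get read-write access
#   sync            writes are committed to disk before the server replies
#   no_root_squash  remote root is not mapped to nobody (keeps root privileges)
#   no_all_squash   ordinary users keep their own UIDs/GIDs
```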
4) Reload the exports so the change takes effect
# exportfs -r
5) Start the rpcbind and nfs services
# systemctl enable --now rpcbind
# systemctl enable --now nfs
6) Verify that the nfs and rpcbind services are healthy
# systemctl status nfs
# systemctl status rpcbind
4. Installing and Configuring Keepalived (same steps on both nodes)
Master node keepalived configuration
- Install keepalived
# yum install -y keepalived
- Edit keepalived.conf on the Master node. Note: both nodes run in non-preemptive mode. With preemption enabled, the VIP would fail back to the old node as soon as it recovered, and frequent VIP flapping can cause NFS data loss. For production we only want a switchover when a node actually fails.
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id nfs
}
vrrp_script chk_nfs {
    script "/etc/keepalived/nfs_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 61
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nfs
    }
    virtual_ipaddress {
        10.10.203.166/24
    }
}
- Relocate the keepalived log file for easier day-to-day operations (same steps on both nodes)
# vim /etc/sysconfig/keepalived
Change KEEPALIVED_OPTIONS="-D" to KEEPALIVED_OPTIONS="-D -d -S 0" (the -S 0 selects syslog facility local0, which the rsyslog rule below matches)
Update the rsyslog configuration
# vim /etc/rsyslog.conf
local0.* /var/log/keepalived.log
Restart rsyslog so the log change takes effect, and enable keepalived
# systemctl restart rsyslog
# systemctl enable keepalived
Slave node keepalived configuration
#vim /etc/keepalived/keepalived.conf
global_defs {
    router_id nfs
}
vrrp_script chk_nfs {
    script "/etc/keepalived/nfs_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 61
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nfs
    }
    virtual_ipaddress {
        10.10.203.166/24
    }
}
- Create the nfs_check.sh monitoring script (same on both nodes). The original had a fatal syntax error (spaces around `=` in the KEEP assignment) and let grep match its own process; both are fixed below.
# vi /etc/keepalived/nfs_check.sh
#!/bin/bash
for i in $(seq 14); do
    counter=$(ps -aux | grep '\[nfsd\]' | wc -l)
    KEEP=$(ps -ef | grep keepalived | grep -v grep | wc -l)
    if [ $counter -eq 0 ]; then
        sudo systemctl restart nfs
    fi
    sleep 2
    counter=$(ps -aux | grep '\[nfsd\]' | wc -l)
    if [ $counter -eq 0 ]; then
        systemctl stop keepalived.service
    else
        if [ $KEEP -eq 0 ]; then
            systemctl start keepalived
        fi
    fi
    sleep 2
done
Grant the script execute permission
# chmod 755 /etc/keepalived/nfs_check.sh
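To sanity-check the script's detection logic without restarting any services, you can run the same process check by hand. A minimal sketch (`nfsd_count` is my name for the check; `echo` stands in for the real `systemctl` call):

```shell
#!/bin/bash
# Dry run of nfs_check.sh's detection logic: count [nfsd] kernel threads
# the same way the script does, but only print the action it would take.
nfsd_count() {
    # the escaped brackets keep grep from matching its own process
    ps -aux | grep '\[nfsd\]' | wc -l
}
if [ "$(nfsd_count)" -eq 0 ]; then
    echo "would run: systemctl restart nfs"
else
    echo "nfsd is running, nothing to do"
fi
```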
- Start the keepalived service
systemctl enable keepalived.service --now
5. Installing Rsync + Inotify (on both Master and Slave)
- Install rsync and inotify-tools
# yum install -y rsync inotify-tools
Master node: configure rsyncd.conf
# cp /etc/rsyncd.conf /etc/rsyncd.conf_bak
# vim /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
port = 873
hosts allow = 10.10.203.0/24
max connections = 0
timeout = 300
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsyncd.lock
log file = /var/log/rsyncd.log
log format = %t %a %m %f %b
transfer logging = yes
syslog facility = local3
[master_nfs]
path = /data/nfs/
comment = master_nfs
ignore errors
read only = no
list = no
auth users = nfs
secrets file = /opt/rsync_salve.pass
Create the auth user file (format "username:password")
# vim /opt/rsync_salve.pass
nfs:nfs123
Create the sync password file
This file holds only the password of the peer's rsync user. Here both nodes use nfs:nfs123, so the Master's password file contains just nfs123 (and likewise on the Slave).
# vi /opt/rsync.pass
nfs123
Restrict the files' permissions (rsync refuses to use a world-readable secrets file)
# chmod 600 /opt/rsync_salve.pass
# chmod 600 /opt/rsync.pass
Start the service
# systemctl enable --now rsyncd
- Slave节点配置rsyncd.conf
#yum install -y rsync inotify-tools
编辑rsyncd配置文件,只需将master节点/etc/rsyncd.conf配置中[master_nfs]改成[slave_nfs]即可
# vim /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
port = 873
hosts allow = 10.10.203.0/24
max connections = 0
timeout = 300
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsyncd.lock
log file = /var/log/rsyncd.log
log format = %t %a %m %f %b
transfer logging = yes
syslog facility = local3
[slave_nfs]
path = /data/nfs/
comment = slave_nfs
ignore errors
read only = no
list = no
auth users = nfs
secrets file = /opt/rsync_salve.pass
Since this is a high-availability architecture, synchronization is two-way: not only Master to Slave, but Slave back to Master as well.
Create the auth user file
# vim /opt/rsync_salve.pass
nfs:nfs123
Create the sync password file
As on the Master, this file holds only the peer's password, here nfs123.
# vi /opt/rsync.pass
nfs123
Restrict the files' permissions
# chmod 600 /opt/rsync_salve.pass
# chmod 600 /opt/rsync.pass
Start the service
# systemctl enable --now rsyncd
Verification: manually sync the Master's NFS data to the Slave. First create test data in the Master's NFS share:
# mkdir /data/nfs/test
# touch /data/nfs/{a,b}
Manually push the Master's share to the Slave's rsync module (note the trailing slash on the source, so the directory's contents are synced rather than the directory itself):
# rsync -avzp --delete /data/nfs/ nfs@slaveIP::slave_nfs --password-file=/opt/rsync.pass
Check on the Slave node whether the data arrived:
# ls /data/nfs
6. Setting Up Rsync + Inotify Automatic Synchronization
- Master node: configure automatic sync with inotify
Create a directory for the rsync + inotify script
# mkdir -p /usr/local/nfs_rsync/
Write the auto-sync script
# vim /usr/local/nfs_rsync/rsync_inotify.sh
#!/bin/bash
host=10.10.203.167 # change to the Slave's real IP
src=/data/nfs/
des=slave_nfs
password=/opt/rsync.pass
user=nfs
inotifywait=/usr/bin/inotifywait
$inotifywait -mrq --timefmt '%Y%m%d %H:%M' --format '%T %w%f%e' -e modify,delete,create,attrib $src \
| while read files ;do
rsync -avzP --delete --timeout=100 --password-file=${password} $src $user@$host::$des
echo "${files} was rsynced" >>/tmp/rsync.log 2>&1
done
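The script's shape — inotifywait streaming events into a `while read` loop that triggers a full rsync per event — can be exercised without inotify-tools by feeding the loop fake event lines. A sketch (`handle_events` is my name; `echo` stands in for the rsync call):

```shell
#!/bin/bash
# Simulate rsync_inotify.sh's event loop: same pipeline shape, but the
# events come from printf and the rsync call is replaced by echo.
handle_events() {
    while read -r files; do
        echo "would rsync after event: $files"
    done
}
printf '%s\n' \
    '20230801 10:00 /data/nfs/a CREATE' \
    '20230801 10:01 /data/nfs/b MODIFY' \
| handle_events
```

One full `rsync -avzP --delete` of the whole tree runs per event, which is why the real script syncs the entire share rather than the single changed file.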
Configure systemd to manage the auto-sync script
- Write the start script
# vim /usr/local/nfs_rsync/rsync_inotify_start.sh
#!/bin/bash
nohup=/usr/bin/nohup
$nohup sh /usr/local/nfs_rsync/rsync_inotify.sh >> /var/log/rsynch.log 2>&1 &
- Write the stop script
# vim /usr/local/nfs_rsync/rsync_inotify_stop.sh
#!/bin/bash
for i in $(ps -ef | grep rsync_inotify.sh | grep -v grep | awk '{print $2}')
do
    kill -9 $i
done
- Write the systemd unit
#vim /usr/lib/systemd/system/nfs_rsync.service
[Unit]
Description=rsync_inotify service
[Service]
Type=forking
TimeoutStartSec=10
WorkingDirectory=/usr/local/nfs_rsync
User=root
Group=root
Restart=on-failure
RestartSec=15s
ExecStart=/usr/local/nfs_rsync/rsync_inotify_start.sh
ExecStop=/usr/local/nfs_rsync/rsync_inotify_stop.sh
[Install]
WantedBy=multi-user.target
- Start the inotify sync service
# systemctl enable --now nfs_rsync.service
- Slave node: configure automatic sync with inotify
1) Write the auto-sync script on the Slave node
# mkdir -p /usr/local/nfs_rsync/
# vim /usr/local/nfs_rsync/rsync_inotify.sh
#!/bin/bash
host=10.10.203.180 # change to the Master's real IP
src=/data/nfs/
des=master_nfs
password=/opt/rsync.pass
user=nfs
inotifywait=/usr/bin/inotifywait
$inotifywait -mrq --timefmt '%Y%m%d %H:%M' --format '%T %w%f%e' -e modify,delete,create,attrib $src \
| while read files ;do
rsync -avzP --delete --timeout=100 --password-file=${password} $src $user@$host::$des
echo "${files} was rsynced" >>/tmp/rsync.log 2>&1
done
2) Configure systemd to manage the auto-sync script
- Write the start script
# vim /usr/local/nfs_rsync/rsync_inotify_start.sh
#!/bin/bash
nohup=/usr/bin/nohup
$nohup sh /usr/local/nfs_rsync/rsync_inotify.sh >> /var/log/rsynch.log 2>&1 &
- Write the stop script
# vim /usr/local/nfs_rsync/rsync_inotify_stop.sh
#!/bin/bash
for i in $(ps -ef | grep rsync_inotify.sh | grep -v grep | awk '{print $2}')
do
    kill -9 $i
done
- Write the systemd unit
#vim /usr/lib/systemd/system/nfs_rsync.service
[Unit]
Description=rsync_inotify service
[Service]
Type=forking
TimeoutStartSec=10
WorkingDirectory=/usr/local/nfs_rsync
User=root
Group=root
Restart=on-failure
RestartSec=15s
ExecStart=/usr/local/nfs_rsync/rsync_inotify_start.sh
ExecStop=/usr/local/nfs_rsync/rsync_inotify_stop.sh
[Install]
WantedBy=multi-user.target
- Start the inotify sync service
# systemctl enable --now nfs_rsync.service
7. Configuring the Keepalived VIP Monitor Script (same on both Master and Slave)
Script logic: vip_monitor.sh checks for the VIP and the inotifywait process to decide whether the Rsync + Inotify auto-sync should run. If the VIP is not on the current node, the sync service is stopped, so a node that does not hold the VIP never pushes a sync. (The original script grepped for 10.90.12.30, apparently left over from another environment; corrected below to the keepalived VIP configured in section 4.)
1) Write the VIP monitor script
# mkdir -p /usr/local/vip_monitor/
# vi /usr/local/vip_monitor/vip_monitor.sh
#!/bin/bash
while :
do
    VIP_NUM=$(ip addr | grep 10.10.203.166 | wc -l)
    RSYNC_INOTIFY_NUM=$(ps -ef | grep /usr/bin/inotifywait | grep -v grep | wc -l)
    if [ ${VIP_NUM} -eq 0 ]; then
        echo "VIP is not on this NFS node" > /tmp/1.log
        if [ ${RSYNC_INOTIFY_NUM} -ne 0 ]; then
            systemctl stop nfs_rsync.service
        fi
    else
        echo "VIP is on this NFS node" >/dev/null 2>&1
        systemctl start nfs_rsync.service
    fi
    sleep 20
done
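The decisive check is simply whether the VIP shows up in `ip addr` output. A standalone, read-only sketch of that test (`vip_present` is my name; `echo` stands in for the `systemctl` calls):

```shell
#!/bin/bash
# Reproduce the monitor's VIP test without touching any services.
VIP=10.10.203.166   # the keepalived VIP configured in section 4
vip_present() {
    ip addr | grep "$VIP" | wc -l
}
if [ "$(vip_present)" -eq 0 ]; then
    echo "would run: systemctl stop nfs_rsync.service"
else
    echo "would run: systemctl start nfs_rsync.service"
fi
```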
2) Configure systemd to manage the VIP monitor script
#mkdir -p /usr/local/vip_monitor/
- Write the start script
# vim /usr/local/vip_monitor/vip_monitor_start.sh
#!/bin/bash
nohup=/usr/bin/nohup
$nohup sh /usr/local/vip_monitor/vip_monitor.sh >> /var/log/vip_monitor.log 2>&1 &
- Write the stop script
# vi /usr/local/vip_monitor/vip_monitor_stop.sh
#!/bin/bash
ps -ef | grep vip_monitor.sh | grep -v "grep" | awk '{print $2}' | xargs kill -9
- Write the systemd unit
# vi /usr/lib/systemd/system/vip_monitor.service
[Unit]
Description=vip_monitor service
[Service]
Type=forking
TimeoutStartSec=10
WorkingDirectory=/usr/local/vip_monitor
User=root
Group=root
Restart=on-failure
RestartSec=15s
ExecStart=/usr/local/vip_monitor/vip_monitor_start.sh
ExecStop=/usr/local/vip_monitor/vip_monitor_stop.sh
[Install]
WantedBy=multi-user.target
3) Start the VIP monitor service
# systemctl enable --now vip_monitor.service
Make sure the vip_monitor.sh monitor process stays running at all times.
END