GlusterFS + Keepalived for Highly Available Storage

Posted: 2022-11-22

 

Environment Preparation

1. Server list

IP            Hostname      Storage     OS        VIP
192.168.1.42  data-node-01  /dev/sdb1   CentOS 7  192.168.1.99
192.168.1.49  data-node-02  /dev/sdb1   CentOS 7  192.168.1.99
192.168.1.51  web-node-12   (client)    CentOS 7  -

Note: the VIP 192.168.1.99 is what clients use to reach the storage service.

2. /etc/hosts configuration

192.168.1.42 data-node-01
192.168.1.49 data-node-02

Create the GlusterFS brick mount point (run on both data-node-01 and data-node-02; this assumes /dev/sdb1 has already been formatted as xfs):

mkdir -p /glusterfs/storage1
echo "/dev/sdb1    /glusterfs/storage1    xfs    defaults    0 0" >> /etc/fstab
mount -a
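The fstab line above follows the standard six-field format: device, mount point, filesystem type, options, dump flag, and fsck pass number. A tiny sketch that assembles such an entry (the helper name is ours, for illustration only):

```shell
# fstab_entry DEVICE MOUNTPOINT FSTYPE
# print a six-field /etc/fstab line: device, mountpoint, fstype, options, dump, fsck-pass
fstab_entry() {
    printf '%s\t%s\t%s\tdefaults\t0 0\n' "$1" "$2" "$3"
}

fstab_entry /dev/sdb1 /glusterfs/storage1 xfs
```

Appending the generated line to /etc/fstab and running mount -a has the same effect as the echo above.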

Install the GlusterFS Server Software

3. Install the Gluster repository, then GlusterFS and related packages (on both data-node-01 and data-node-02)

yum install centos-release-gluster -y
yum install glusterfs glusterfs-server glusterfs-cli glusterfs-geo-replication glusterfs-rdma -y

4. Start the glusterd service (on both data-node-01 and data-node-02)

systemctl start glusterd

5. Add trusted peers from any one node (here, data-node-01)

gluster peer probe data-node-02    # probing the local node itself is unnecessary; Gluster reports "probe on localhost not needed"

gluster peer status

6. Create a replicated volume from any one node (here, data-node-01)

mkdir /glusterfs/storage1/rep_vol1      # create this brick directory on both data-node-01 and data-node-02

gluster volume create rep_vol1 replica 2 data-node-01:/glusterfs/storage1/rep_vol1 data-node-02:/glusterfs/storage1/rep_vol1

Note: Gluster warns that a 2-way replica is prone to split-brain; replica 3 or an arbiter volume is recommended for production.

7. Start the replicated volume

gluster volume start rep_vol1

8. Check the volume status

gluster volume status
gluster volume info

Install the GlusterFS Client Software (on web-node-12)

9. Install the client packages (the same Gluster repository is needed first, as on the servers)

yum install centos-release-gluster -y
yum install glusterfs-fuse -y

10. Test writing to the replicated volume from the client (this assumes the volume has already been mounted at /data; mounting through the VIP is shown in step 19)

for i in `seq -w 1 3`;do cp -rp /var/log/messages /data/test-$i;done

[root@localhost ~]# ls /data/
111  1.txt  2.txt  anaconda-ks.cfg  test-1  test-2  test-3

Install and Configure Keepalived

11. Install keepalived (on both data-node-01 and data-node-02)

yum -y install keepalived

12. Start the keepalived service

systemctl start keepalived

13. Master node keepalived configuration (/etc/keepalived/keepalived.conf)

! Configuration File for keepalived

global_defs {
   notification_email {
       [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id GFS_HA_MASTER
   vrrp_skip_check_adv_addr
}

vrrp_sync_group GFS_HA_GROUP {
    group {
        GFS_HA_1
    }
}

vrrp_script monitor_glusterfs_status {
    script "/etc/keepalived/scripts/monitor_glusterfs_status.sh"
    interval 5
    fall 3
    rise 1
    weight 20
}

vrrp_instance GFS_HA_1 {
    state BACKUP
    interface ens34
    virtual_router_id 107
    priority 100
    advert_int 2
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11112222
    }

    virtual_ipaddress {
        192.168.1.99/24 dev ens34
    }

    track_script {
        monitor_glusterfs_status
    }

    track_interface {
        ens34
    }

    notify_master "/etc/keepalived/scripts/keepalived_notify.sh master"
    notify_backup "/etc/keepalived/scripts/keepalived_notify.sh backup"
    notify_fault  "/etc/keepalived/scripts/keepalived_notify.sh fault"
    notify_stop   "/etc/keepalived/scripts/keepalived_notify.sh stop"
}
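Failover ordering in this setup comes from the VRRP priority plus the track-script weight: while monitor_glusterfs_status passes, each node adds weight 20 to its base priority (master 100 + 20 = 120, backup 90 + 20 = 110); once the script has failed three checks in a row (fall 3), the bonus is dropped, so a failed master at 100 loses to a healthy backup at 110. A minimal sketch of that arithmetic (the function name is ours):

```shell
# effective_priority BASE WEIGHT HEALTHY(1|0)
# a node whose vrrp_script passes advertises base+weight, otherwise just base
effective_priority() {
    local base=$1 weight=$2 healthy=$3
    if [ "$healthy" -eq 1 ]; then
        echo $((base + weight))
    else
        echo "$base"
    fi
}

effective_priority 100 20 1   # healthy master
effective_priority 100 20 0   # master with failed health check
effective_priority 90  20 1   # healthy backup
```

Because both instances are configured as state BACKUP with nopreempt, whichever node currently advertises the higher effective priority keeps the VIP, and a recovered node does not take it back automatically.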

14. Backup node keepalived configuration (/etc/keepalived/keepalived.conf)

! Configuration File for keepalived

global_defs {
   notification_email {
       [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id GFS_HA_BACKUP
   vrrp_skip_check_adv_addr
}

vrrp_sync_group GFS_HA_GROUP {
    group {
        GFS_HA_1
    }
}

vrrp_script monitor_glusterfs_status {
    script "/etc/keepalived/scripts/monitor_glusterfs_status.sh"
    interval 5
    fall 3
    rise 1
    weight 20
}

vrrp_instance GFS_HA_1 {
    state BACKUP
    interface ens34
    virtual_router_id 107
    priority 90
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 11112222
    }

    virtual_ipaddress {
        192.168.1.99/24 dev ens34
    }

    track_script {
        monitor_glusterfs_status
    }

    track_interface {
        ens34
    }

    notify_master "/etc/keepalived/scripts/keepalived_notify.sh master"
    notify_backup "/etc/keepalived/scripts/keepalived_notify.sh backup"
    notify_fault  "/etc/keepalived/scripts/keepalived_notify.sh fault"
    notify_stop   "/etc/keepalived/scripts/keepalived_notify.sh stop"
}

15. Keepalived VRRP health-check script

cat /etc/keepalived/scripts/monitor_glusterfs_status.sh
#!/bin/bash
# check the glusterd and glusterfsd processes

if systemctl status glusterd &>/dev/null; then
    if systemctl status glusterfsd &>/dev/null; then
        exit 0      # both daemons healthy
    else
        exit 2      # brick daemon down: drop this node's priority bonus
    fi
else
    # glusterd itself is down: try to restart it, then stop keepalived
    # so the VIP moves to the other node
    systemctl start glusterd &>/dev/null
    systemctl stop keepalived &>/dev/null && exit 1
fi
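The health-check script's contract with keepalived is its exit code: 0 when glusterd and glusterfsd are both active, 2 when only glusterd is, and non-zero after glusterd itself has gone down. That decision tree can be exercised in isolation by shadowing systemctl with a stub function; the stub and its state variables are ours, for illustration only:

```shell
#!/bin/bash
# stub: answer "systemctl status <unit>" from two variables instead of the real system
systemctl() {
    case "$2" in
        glusterd)   [ "$GLUSTERD_UP" = 1 ] ;;
        glusterfsd) [ "$GLUSTERFSD_UP" = 1 ] ;;
    esac
}

# same decision tree as monitor_glusterfs_status.sh, printing the code instead of exiting
check() {
    if systemctl status glusterd >/dev/null 2>&1; then
        if systemctl status glusterfsd >/dev/null 2>&1; then
            echo 0
        else
            echo 2
        fi
    else
        echo 1
    fi
}

GLUSTERD_UP=1 GLUSTERFSD_UP=1; check   # both daemons up
GLUSTERD_UP=1 GLUSTERFSD_UP=0; check   # brick daemon down
GLUSTERD_UP=0 GLUSTERFSD_UP=0; check   # glusterd down
```

Shell functions take precedence over PATH lookup, so check calls the stub rather than the real systemctl.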

16. Keepalived notify script (manages the glusterd service)

cat /etc/keepalived/scripts/keepalived_notify.sh
#!/bin/bash
# keepalived notify script for glusterd

master() {
    # on becoming master, make sure glusterd is up and in a fresh state
    if systemctl status glusterd; then
        systemctl restart glusterd
    else
        systemctl start glusterd
    fi
}

backup() {
    # on becoming backup, just make sure glusterd is running
    if ! systemctl status glusterd; then
        systemctl start glusterd
    fi
}

case $1 in
    master)
        master
    ;;
    backup)
        backup
    ;;
    fault)
        backup
    ;;
    stop)
        backup
        systemctl restart keepalived
    ;;
    *)
        echo $"Usage: $0 {master|backup|fault|stop}"
esac

Remember to make both scripts executable: chmod +x /etc/keepalived/scripts/*.sh
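keepalived invokes the notify script with a single state argument (master, backup, fault, or stop), matching the four notify_* lines in the configuration. The dispatch can be tried standalone with a trace version in which the service actions are replaced by echo (this condensed variant is ours):

```shell
#!/bin/bash
# condensed trace of keepalived_notify.sh: print the intended action instead of running systemctl
notify() {
    case "$1" in
        master)       echo "restart glusterd" ;;
        backup|fault) echo "ensure glusterd is running" ;;
        stop)         echo "ensure glusterd is running; restart keepalived" ;;
        *)            echo "Usage: notify {master|backup|fault|stop}" ;;
    esac
}

notify master
notify fault
notify unknown
```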

Test Keepalived Failover of the GlusterFS Service and Storage Availability

17. Restart the keepalived service (on both nodes, so the configuration takes effect)

systemctl restart keepalived.service

18. Check which node holds the VIP

## On node 1

[root@data-node-01 ~]# ip a show dev ens34

3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:b2:b5:2a brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.42/24 brd 192.168.1.255 scope global ens34

       valid_lft forever preferred_lft forever

    inet 192.168.1.99/24 scope global secondary ens34

       valid_lft forever preferred_lft forever

    inet6 fe80::ce9a:ee2e:7b6c:a6bb/64 scope link

       valid_lft forever preferred_lft forever

 

## On node 2

[root@data-node-02 ~]# ip a show dev ens34

3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ba:42:cf brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.49/24 brd 192.168.1.255 scope global ens34

       valid_lft forever preferred_lft forever

    inet6 fe80::e23:ce0:65c3:ffbf/64 scope link

       valid_lft forever preferred_lft forever

19. On the client, mount the replicated volume through the VIP and verify that it works

mount -t glusterfs 192.168.1.99:rep_vol1 /data/

 

[root@localhost ~]# ls /data/

111  1.txt  2.txt  anaconda-ks.cfg  test  test-1  test-2  test-3

 

[root@localhost ~]# mkdir /data/test

[root@localhost ~]# echo 1111 >/data/test/1.txt

[root@localhost ~]# ls /data/test

1.txt

[root@localhost ~]# cat /data/test/1.txt

1111

20. Check the replicated data on a GlusterFS node

[root@data-node-02 ~]# ls /glusterfs/storage1/rep_vol1/

111  1.txt  2.txt  anaconda-ks.cfg  test  test-1  test-2  test-3

21. Test GlusterFS service failover: shut down or reboot the master (node 1), then check whether the GlusterFS service and the VIP move to node 2

[root@data-node-01 ~]# reboot

 

[root@data-node-02 ~]# ip a show dev ens34

3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ba:42:cf brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.49/24 brd 192.168.1.255 scope global ens34

       valid_lft forever preferred_lft forever

    inet 192.168.1.99/24 scope global secondary ens34

       valid_lft forever preferred_lft forever

    inet6 fe80::e23:ce0:65c3:ffbf/64 scope link

       valid_lft forever preferred_lft forever

22. Verify on the client that the storage is still available

[root@localhost ~]# df -Th

Filesystem             Type            Size  Used Avail Use% Mounted on

/dev/mapper/cl-root    xfs              40G  1.2G   39G   3% /

devtmpfs               devtmpfs        1.9G     0  1.9G   0% /dev

tmpfs                  tmpfs           1.9G     0  1.9G   0% /dev/shm

tmpfs                  tmpfs           1.9G  8.6M  1.9G   1% /run

tmpfs                  tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup

/dev/sda1              xfs            1014M  139M  876M  14% /boot

tmpfs                  tmpfs           378M     0  378M   0% /run/user/0

192.168.1.99:rep_vol1 fuse.glusterfs   10G  136M  9.9G   2% /data

 

[root@localhost ~]# ls /data/

111  1.txt  2.txt  anaconda-ks.cfg  test  test-1  test-2  test-3

 

[root@localhost ~]# touch /data/test.log

[root@localhost ~]# ls -l /data/

total 964

drwxr-xr-x 3 root root   4096 Aug 27 21:58 111

-rw-r--r-- 1 root root     10 Aug 27 21:23 1.txt

-rw-r--r-- 1 root root      6 Aug 27 21:36 2.txt

-rw------- 1 root root   2135 Aug 27 21:44 anaconda-ks.cfg

drwxr-xr-x 2 root root   4096 Aug 27 22:59 test

-rw------- 1 root root 324951 Aug 27 21:23 test-1

-rw------- 1 root root 324951 Aug 27 21:23 test-2

-rw------- 1 root root 324951 Aug 27 21:23 test-3

-rw-r--r-- 1 root root      0 Aug 27 23:05 test.log

23. Check node 1's state after it comes back up

[root@data-node-01 ~]# ip a show dev ens34

3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:b2:b5:2a brd ff:ff:ff:ff:ff:ff

    inet 192.168.1.42/24 brd 192.168.1.255 scope global ens34

       valid_lft forever preferred_lft forever

    inet6 fe80::ce9a:ee2e:7b6c:a6bb/64 scope link

       valid_lft forever preferred_lft forever

24. Start the keepalived service on node 1 again

[root@data-node-01 ~]# systemctl start keepalived.service

 

Summary:

After node 1 recovers, its keepalived rejoins in the backup state (thanks to nopreempt) while continuing to monitor the GlusterFS service. If node 2 later fails, the service, the storage, and the VIP switch back to node 1, which keeps serving clients; this is how high availability of the storage is achieved.
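A quick way to see which node currently owns the VIP is to look for the address in the output of `ip a`, as done manually in steps 18 and 21. A sketch over canned output (the sample text and helper name are ours):

```shell
# has_vip ADDR: succeed if "inet ADDR/" appears on stdin (e.g. piped from: ip a show dev ens34)
has_vip() {
    grep -q "inet $1/"
}

SAMPLE='3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.49/24 brd 192.168.1.255 scope global ens34
    inet 192.168.1.99/24 scope global secondary ens34'

if echo "$SAMPLE" | has_vip 192.168.1.99; then
    echo "this node holds the VIP"
else
    echo "VIP is elsewhere"
fi
```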

From: https://www.cnblogs.com/wutao-007/p/16914965.html
