
CentOS High-Availability Ops Case Study: Configuring bond0

Posted: 2024-10-16 11:44:28
Tags: bond0, CentOS, ops, node201, NIC, root, network

Case description:
On CentOS 7, configure a bond0 interface on each of two servers and test connectivity between them.

System version:

[root@node201 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Network architecture: (topology diagram not reproduced in this copy)

Introduction to bonding modes:
mode=1 (active-backup)
In this mode only one NIC is active; the other is a standby, and all traffic is carried on the active link. The switch ports must not be configured as an aggregated group: if the switch sends packets to both NICs, roughly half of them are dropped.
Characteristics: only one device is active at a time; when it fails, the standby immediately takes over as the active device. Externally the bond presents a single MAC address, so the switch is not confused by the same address appearing on two ports.
This mode provides fault tolerance only. It offers high availability of the network connection, but resource utilization is low: with N interfaces only one is working at any time, so utilization is 1/N.
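mode=1 is only one of the seven modes the bonding driver supports. As a reference, here is a minimal shell sketch of the mode-number-to-name mapping, with the names as defined by the kernel bonding driver:

```shell
#!/bin/sh
# Map a bonding mode number to the name used by the kernel bonding driver.
bond_mode_name() {
  case "$1" in
    0) echo "balance-rr"    ;;  # round-robin across all slaves
    1) echo "active-backup" ;;  # one active slave, the rest on standby
    2) echo "balance-xor"   ;;  # XOR-hash based load balancing
    3) echo "broadcast"     ;;  # transmit on every slave
    4) echo "802.3ad"       ;;  # LACP aggregation (requires switch support)
    5) echo "balance-tlb"   ;;  # adaptive transmit load balancing
    6) echo "balance-alb"   ;;  # adaptive load balancing (tlb + receive)
    *) echo "unknown"; return 1 ;;
  esac
}

bond_mode_name 1   # prints "active-backup", the mode used in this article
```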

I. Check whether the kernel supports bonding
If the command below returns module information, the kernel supports bonding:

[root@node201 ~]# modinfo bonding |more
filename:       /lib/modules/3.10.0-1160.118.1.el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
author:         Thomas Davis, [email protected] and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
retpoline:      Y
rhelversion:    7.9
srcversion:     B395E7507BE97AC98A6E886
depends:
intree:         Y
vermagic:       3.10.0-1160.118.1.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        7C:18:B6:12:D5:11:92:49:73:9A:2C:83:4F:26:1F:AC:0B:15:18:19
sig_hashalgo:   sha256
......
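If a simple yes/no answer is enough, modprobe's dry-run flag resolves the module without loading it. A small sketch (assumes a Linux host with modprobe on the PATH):

```shell
#!/bin/sh
# Dry-run check for bonding support: modprobe -n resolves the module
# without actually loading it into the kernel.
if modprobe -n bonding 2>/dev/null; then
  status="available"
else
  status="not available"
fi
echo "bonding module: $status"
```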

II. Check the host's NIC information

[root@node201 ~]# ip add sh
.......
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:34:0a:8f brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.115/24 brd 192.168.56.255 scope global noprefixroute dynamic enp0s9
  
5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:58:bd:ac brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.114/24 brd 192.168.56.255 scope global noprefixroute dynamic enp0s10


[root@node201 network-scripts]# ethtool enp0s9
Settings for enp0s9:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off (auto)
        Supports Wake-on: umbg
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes


[root@node201 network-scripts]# ethtool enp0s10
Settings for enp0s10:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off (auto)
        Supports Wake-on: umbg
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
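Both slaves should report the same speed and duplex before they are bonded. A small parsing sketch over output like the above; the sample text is hardcoded here, and on a live system you would pipe in `ethtool enp0s9` instead:

```shell
#!/bin/sh
# Extract speed, duplex and link state from ethtool-style output.
# Sample taken from the article; on a live host substitute: ethtool enp0s9
sample='Speed: 1000Mb/s
Duplex: Full
Link detected: yes'

speed=$(printf '%s\n' "$sample"  | awk -F': ' '/Speed/ {print $2}')
duplex=$(printf '%s\n' "$sample" | awk -F': ' '/Duplex/ {print $2}')
link=$(printf '%s\n' "$sample"   | awk -F': ' '/Link detected/ {print $2}')
echo "$speed $duplex $link"   # prints "1000Mb/s Full yes"
```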

III. Configure the NICs
1. Configure physical NIC enp0s9

[root@node201 network-scripts]# cat ifcfg-enp0s9
TYPE=Ethernet
BOOTPROTO=none
NAME=enp0s9
DEVICE=enp0s9
ONBOOT=yes
MASTER=bond0
SLAVE=yes

2. Configure physical NIC enp0s10

[root@node201 network-scripts]# cat ifcfg-enp0s10
TYPE=Ethernet
BOOTPROTO=none
NAME=enp0s10
DEVICE=enp0s10
ONBOOT=yes
MASTER=bond0
SLAVE=yes

3. Configure the bond interface

[root@node201 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
NAME='System bond0'
TYPE=Ethernet
NM_CONTROLLED=no
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.100
NETMASK=255.255.255.0
BONDING_OPTS='mode=1 miimon=100'
IPV6INIT=no

# miimon=100
# Poll the link state every 100 ms (0.1 s); if one link goes down,
# traffic fails over to the other link.
Linux NIC bonding is implemented by the kernel's "bonding" module. To use a
different mode, simply set mode=<number> in BONDING_OPTS.
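For example, switching the same bond to LACP aggregation would only change BONDING_OPTS. The fragment below is a hypothetical mode=4 variant of the ifcfg-bond0 above, and additionally requires LACP support on the switch:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical mode=4 variant)
DEVICE=bond0
TYPE=Ethernet
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.100
NETMASK=255.255.255.0
# 802.3ad (LACP); miimon still polls link state every 100 ms
BONDING_OPTS='mode=4 miimon=100'
```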

4. Configure module loading

[root@node201 network-scripts]# echo 'alias bond0 bonding' >> /etc/modprobe.d/dist.conf
[root@node201 network-scripts]# echo 'options bonding mode=1 miimon=100 fail_over_mac=1' >> /etc/modprobe.d/dist.conf
[root@node201 network-scripts]# echo 'ifenslave bond0 enp0s9 enp0s10' >>/etc/rc.local

[root@node201 network-scripts]# cat /etc/modprobe.d/dist.conf
alias bond0 bonding
options bonding mode=1 miimon=100 fail_over_mac=1

# As the kernel documentation explains, bond0 can obtain its MAC address in
two ways. By default it takes the MAC of the first active slave and applies
that address to all other slaves. Alternatively, with the fail_over_mac
parameter, bond0 uses the MAC of the currently active slave, so the bond's
MAC changes whenever the active slave changes.
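The fail_over_mac parameter takes three values. A small lookup sketch of their meanings, as described in the kernel bonding documentation; on a live system the current setting can be read from /sys/class/net/bond0/bonding/fail_over_mac:

```shell
#!/bin/sh
# Meaning of the bonding driver's fail_over_mac values.
fail_over_mac_name() {
  case "$1" in
    0) echo "none"   ;;  # default: all slaves share the bond's MAC
    1) echo "active" ;;  # bond MAC always tracks the currently active slave
    2) echo "follow" ;;  # the MAC moves to each slave as it becomes active
    *) echo "unknown"; return 1 ;;
  esac
}

fail_over_mac_name 1   # prints "active", the setting used in this article
```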

IV. Activate the bond0 interface

1. Restart the network service
As shown below, the network service fails to start:

[root@node201 network-scripts]# systemctl restart network.service
[root@node201 network-scripts]# systemctl status network.service
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2024-10-15 14:00:02 CST; 1min 11s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 24746 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
  Process: 25044 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)
    Tasks: 0

Oct 15 14:00:01 node201 network[25044]: Bringing up interface enp0s3:  Connection successfully activate...on/6)
Oct 15 14:00:01 node201 network[25044]: [  OK  ]
Oct 15 14:00:02 node201 network[25044]: Bringing up interface enp0s8:  Connection successfully activate...on/7)
Oct 15 14:00:02 node201 network[25044]: [  OK  ]
Oct 15 14:00:02 node201 network[25044]: Bringing up interface enp0s9:  Error: Connection activation fai...ation
Oct 15 14:00:02 node201 network[25044]: [FAILED]
Oct 15 14:00:02 node201 systemd[1]: network.service: control process exited, code=exited status=1
Oct 15 14:00:02 node201 systemd[1]: Failed to start LSB: Bring up/down networking.
Oct 15 14:00:02 node201 systemd[1]: Unit network.service entered failed state.
Oct 15 14:00:02 node201 systemd[1]: network.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

2. Manually activate bond0
As shown below, activating bond0 fails:

[root@node201 network-scripts]# ifdown bond0;ifup bond0
Error: Connection activation failed: Master device 'enp0s10' can't be activated: Device unmanaged or not available for activation
WARN      : [/etc/sysconfig/network-scripts/ifup-eth] Unable to start slave device ifcfg-enp0s10 for master bond0.
Error: Connection activation failed: Master device 'enp0s9' can't be activated: Device unmanaged or not available for activation
WARN      : [/etc/sysconfig/network-scripts/ifup-eth] Unable to start slave device ifcfg-enp0s9 for master bond0.

# Activating a physical NIC also fails
[root@node201 network-scripts]# ifup enp0s9
Error: Connection activation failed: Master device 'enp0s9' can't be activated: Device unmanaged or not available for activation

3. Check bond0 binding information
As shown below, no slave NIC information is displayed for bond0:

[root@node201 network-scripts]#  modprobe bonding
[root@node201 network-scripts]#  cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: None
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

4. Stop the NetworkManager service

[root@node201 network-scripts]# service NetworkManager stop
Redirecting to /bin/systemctl stop NetworkManager.service

5. Activate bond0
As shown below, after stopping NetworkManager, bond0 and the physical NICs can be activated normally, and the network service starts without errors.

[root@node201 network-scripts]# ifup enp0s9
[root@node201 network-scripts]# ifup enp0s10
[root@node201 network-scripts]# ifup bond0

# Restart the network service
[root@node201 network-scripts]# systemctl restart network
[root@node201 network-scripts]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: active (running) since Tue 2024-10-15 14:22:07 CST; 6s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 38194 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)
    Tasks: 1
   CGroup: /system.slice/network.service
           └─38604 /sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient--enp0s8.lease -pf /var/run/dhclient-e...

Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 network[38194]: RTNETLINK answers: File exists
Oct 15 14:22:07 node201 systemd[1]: Started LSB: Bring up/down networking.

# Check bond0 binding information
[root@node201 network-scripts]#  cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp0s9
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s9
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:34:0a:8f
Slave queue ID: 0

Slave Interface: enp0s10
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:58:bd:ac
Slave queue ID: 0
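The active slave and per-slave link state can also be pulled out of this file programmatically. A parsing sketch; the sample below is condensed from the status output above, and on a live system you would pipe /proc/net/bonding/bond0 into the awk instead:

```shell
#!/bin/sh
# Summarize bonding status: the active slave plus each slave's MII state.
# Sample condensed from /proc/net/bonding/bond0 as shown in the article.
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: enp0s9
MII Status: up
Slave Interface: enp0s9
MII Status: up
Slave Interface: enp0s10
MII Status: up'

summary=$(printf '%s\n' "$sample" | awk -F': ' '
  /^Currently Active Slave/    { print "active: " $2 }
  /^Slave Interface/           { slave = $2 }
  /^MII Status/ && slave != "" { print slave ": " $2; slave = "" }
')
printf '%s\n' "$summary"
```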

6. Disable the NetworkManager service

[root@node201 network-scripts]# systemctl stop NetworkManager
[root@node201 network-scripts]# systemctl disable NetworkManager

7. Check network information
As shown below, bond0 has obtained the configured IP, and the physical NICs and bond0 share the same MAC address:

[root@node201 network-scripts]# ip add sh

4: enp0s9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 08:00:27:34:0a:8f brd ff:ff:ff:ff:ff:ff
5: enp0s10: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 08:00:27:34:0a:8f brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 08:00:27:34:0a:8f brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.100/24 brd 192.168.10.255 scope global bond0


V. Test network connectivity

1. Reboot the system so the bond configuration takes effect.
2. Ping the peer host's bond0 IP address

[root@node201 network-scripts]# ping 192.168.10.101
PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.
64 bytes from 192.168.10.101: icmp_seq=1 ttl=64 time=222 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=64 time=0.402 ms
64 bytes from 192.168.10.101: icmp_seq=3 ttl=64 time=0.378 ms
64 bytes from 192.168.10.101: icmp_seq=4 ttl=64 time=0.420 ms
64 bytes from 192.168.10.101: icmp_seq=5 ttl=64 time=0.474 ms

3. Simulate a physical NIC failure (on the local or the remote node)
Taking down any single physical NIC in the bond, on either node, does not interrupt the bond's communication.
[root@node201 ~]# ifdown enp0s9
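The failover behavior itself is simple: active-backup promotes the next slave whose link is up. The function below is a toy model of that selection, not the driver's actual code; on the live system, watch the `Currently Active Slave` line of /proc/net/bonding/bond0 while running the ifdown above:

```shell
#!/bin/sh
# Toy model of active-backup slave selection: the first slave whose
# link is up becomes the active slave.
pick_active() {
  for s in "$@"; do              # args are name:state pairs
    case "$s" in
      *:up) echo "${s%:up}"; return 0 ;;
    esac
  done
  echo "none"; return 1
}

pick_active enp0s9:up enp0s10:up     # prints "enp0s9"
pick_active enp0s9:down enp0s10:up   # after ifdown enp0s9: prints "enp0s10"
```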

VI. Summary
When configuring NIC bonding with the network-scripts (ifcfg) method on CentOS, remember to stop and disable the NetworkManager service; otherwise it keeps the slave devices unmanaged and neither the physical NICs nor the bond can be activated.

From: https://www.cnblogs.com/tiany1224/p/18469558
