
Linux Command mdadm: Managing Software RAID Arrays

Date: 2023-12-30 17:32:06

1. Command Overview

1.1 Introduction

The mdadm (multiple devices admin) command manages RAID arrays. It is the Linux tool for software RAID and covers the full management lifecycle: creating, growing/reshaping, monitoring, and removing arrays.

1.2 Syntax

The general form of the mdadm command is: mdadm [mode] <raid-device> [options] <component-devices>

SYNOPSIS

       mdadm [mode] <raiddevice> [options] <component-devices>

1.3 Common Options

Option   Purpose

-A       Assemble a previously created (pre-configured) array
-B       Build an array whose member devices have no superblocks
-C       Create a new array
-F       Select monitor (follow) mode
-G       Grow: change the size or shape of an active array
-s       Scan the config file or /proc/mdstat for missing array information
-D       Print detailed information about an array
-f       Mark a member device as faulty
-a       Add a new device to an array
-r       Remove a device from an array
-l       Set the RAID level of the array
-n       Specify the number of active members (partitions/disks)
-x       Specify the number of spare devices in the array
-c       Set the array's chunk size, in KiB
-S       Stop an array
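As a quick orientation for the examples below, the usable capacity per RAID level can be sketched with simple shell arithmetic (sizes are illustrative; real arrays come out slightly smaller because mdadm reserves space for the superblock):

```shell
#!/bin/sh
# Rule-of-thumb usable capacity for n equal-size members (sizes in GiB).
n=4; size=10
echo "raid0: $(( n * size )) GiB"          # striping: full capacity, no redundancy
echo "raid1: $(( size )) GiB"              # mirroring: capacity of one member
echo "raid5: $(( (n - 1) * size )) GiB"    # one member's worth of parity
echo "raid6: $(( (n - 2) * size )) GiB"    # two members' worth of parity
```

With the 10 GB disks used in this lab, these rules match the Array Size values that `mdadm -D` reports later (for example, roughly 20 GiB for the 3-disk RAID 5).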

2. Lab Environment

Platform: VMware Workstation 17 Pro

Guest OS: Rocky Linux 8.9

3. Worked Examples

3.1 Configuring a RAID 0 Array

Initially the VM has only one SCSI system disk; add two more SCSI disks, 10 GB each.


Rescan for the new disks without rebooting the system:

[root@localhost ~]# ls -d /sys/class/scsi_host/host*
/sys/class/scsi_host/host0  /sys/class/scsi_host/host1  /sys/class/scsi_host/host2
[root@localhost ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@localhost ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@localhost ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@localhost ~]# ls /sys/class/scsi_device/
1:0:0:0  2:0:0:0  2:0:1:0  2:0:2:0  2:0:3:0  2:0:4:0
[root@localhost ~]# echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan 
[root@localhost ~]# echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan 
[root@localhost ~]# echo 1 > /sys/class/scsi_device/2\:0\:1\:0/device/rescan 
[root@localhost ~]# echo 1 > /sys/class/scsi_device/2\:0\:2\:0/device/rescan 
[root@localhost ~]# echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan 
[root@localhost ~]# echo 1 > /sys/class/scsi_device/2\:0\:4\:0/device/rescan 
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0   50G  0 disk 
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   49G  0 part 
  ├─rl-root 253:0    0 45.1G  0 lvm  /
  └─rl-swap 253:1    0  3.9G  0 lvm  [SWAP]
sdb           8:16   0   10G  0 disk 			# two new disks detected:
sdc           8:32   0   10G  0 disk 			# /dev/sdb and /dev/sdc
sr0          11:0    1 1024M  0 rom

The mdadm command is already installed by default:

[root@localhost ~]# yum -y install mdadm
Last metadata expiration check: 1:39:48 ago on Sat 30 Dec 2023 11:52:49 AM CST.
Package mdadm-4.2-8.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!

Create a RAID 0 array from the two disks:

[root@localhost ~]# mdadm -Cv /dev/md0 -l 0 -n 2 /dev/sd[b-c]
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Check the array status:

[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Dec 30 13:34:00 2023
        Raid Level : raid0
        Array Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 13:34:00 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 4f98ca1a:36f5c57a:77cc72bf:4d8454e8
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Create a filesystem:

[root@localhost ~]# mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=512    agcount=16, agsize=327296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5236736, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount and use it:

[root@localhost ~]# mkdir /mnt/raid0
[root@localhost ~]# mount /dev/md0 /mnt/raid0
[root@localhost ~]# df -Th /dev/md0
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    20G  176M   20G   1% /mnt/raid0
[root@localhost ~]# echo 'raid0' > /mnt/raid0/file.txt
[root@localhost ~]# cat /mnt/raid0/file.txt
raid0

Try marking one of the disks as faulty:

[root@localhost ~]# mdadm -f /dev/md0 /dev/sdc
mdadm: Cannot remove /dev/sdc from /dev/md0, array will be failed.	# The operation fails: RAID 0 stripes with no redundancy, so failing one member would fail the whole array

3.2 Configuring a RAID 1 Array

Add two more SCSI disks.


Rescan the disks. The rescan commands from section 3.1 can be collected into a script, which is then made executable and run:

[root@localhost ~]# vim rescandisk.sh
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:1\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:2\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:4\:0/device/rescan
[root@localhost ~]# chmod +x rescandisk.sh 
[root@localhost ~]# ./rescandisk.sh 
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   50G  0 disk  
├─sda1        8:1    0    1G  0 part  /boot
└─sda2        8:2    0   49G  0 part  
  ├─rl-root 253:0    0 45.1G  0 lvm   /
  └─rl-swap 253:1    0  3.9G  0 lvm   [SWAP]
sdb           8:16   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdc           8:32   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdd           8:48   0   10G  0 disk  
sde           8:64   0   10G  0 disk  
sr0          11:0    1 1024M  0 rom
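A variant of rescandisk.sh that globs the sysfs entries instead of hardcoding them avoids having to edit the script each time a disk is added. A sketch; the writes require root, and entries that are not writable are simply skipped:

```shell
#!/bin/sh
# Rescan every SCSI host and device via sysfs globs.
for h in /sys/class/scsi_host/host*/scan; do
    if [ -w "$h" ]; then
        echo "- - -" > "$h"      # "- - -" means: scan all channels, targets, LUNs
    fi
done
for d in /sys/class/scsi_device/*/device/rescan; do
    if [ -w "$d" ]; then
        echo 1 > "$d"
    fi
done
```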

Create a RAID 1 array from the two new disks:

[root@localhost ~]# mdadm -Cv /dev/md1 -l 1 -n 2 /dev/sd[d-e]
mdadm: size set to 10476544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Check the block devices and the array status:

[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   50G  0 disk  
├─sda1        8:1    0    1G  0 part  /boot
└─sda2        8:2    0   49G  0 part  
  ├─rl-root 253:0    0 45.1G  0 lvm   /
  └─rl-swap 253:1    0  3.9G  0 lvm   [SWAP]
sdb           8:16   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdc           8:32   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdd           8:48   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 
sde           8:64   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 
sr0          11:0    1 1024M  0 rom   
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Dec 30 14:03:18 2023
        Raid Level : raid1
        Array Size : 10476544 (9.99 GiB 10.73 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 14:04:10 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 10b32b36:725ad4df:247b7ddc:9c246734
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde

Create a filesystem:

[root@localhost ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=654784 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=2619136, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount and use it:

[root@localhost ~]# mkdir /mnt/raid1
[root@localhost ~]# mount /dev/md1 /mnt/raid1
[root@localhost ~]# echo 'raid1' > /mnt/raid1/file.txt
[root@localhost ~]# cat /mnt/raid1/file.txt
raid1

Simulate a disk failure by marking one of the disks as faulty:

[root@localhost ~]# mdadm -f /dev/md1 /dev/sde
mdadm: set /dev/sde faulty in /dev/md1

Check the array status:

[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Dec 30 14:03:18 2023
        Raid Level : raid1
        Array Size : 10476544 (9.99 GiB 10.73 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 14:24:37 2023
             State : clean, degraded 	# the array is degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 10b32b36:725ad4df:247b7ddc:9c246734
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       -       0        0        1      removed		# one slot shows as removed

       1       8       64        -      faulty   /dev/sde	# /dev/sde is marked faulty

Remove the "failed" disk from the array:

[root@localhost ~]# mdadm -r /dev/md1 /dev/sde
mdadm: hot removed /dev/sde from /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Dec 30 14:03:18 2023
        Raid Level : raid1
        Array Size : 10476544 (9.99 GiB 10.73 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 14:27:42 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 10b32b36:725ad4df:247b7ddc:9c246734
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       -       0        0        1      removed

Simulate the recovery: suppose the failed disk has been pulled from the server and a new disk fitted in the same slot, then add the new disk back into the array.

Because /dev/sde was previously an array member, wipe the head of the disk with dd to better simulate a brand-new disk:

[root@localhost ~]# dd if=/dev/zero of=/dev/sde bs=1M count=100 status=progress	# wipe the first 100 MiB
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.258641 s, 405 MB/s
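The same wipe mechanics can be demonstrated safely on a scratch file rather than a real disk. As a side note, mdadm also provides --zero-superblock, which erases only the RAID metadata instead of the whole disk head:

```shell
#!/bin/sh
# Demonstrate the dd wipe on a throwaway file instead of a disk.
# Against the real device the equivalent would be:
#   dd if=/dev/zero of=/dev/sde bs=1M count=100
# or, to remove only the md metadata:
#   mdadm --zero-superblock /dev/sde
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=10 status=none   # write 10 MiB of zeros
wc -c < "$scratch"                                         # prints 10485760
rm -f "$scratch"
```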

Add the disk to the array:

[root@localhost ~]# mdadm /dev/md1 -a /dev/sde
mdadm: added /dev/sde
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Dec 30 14:03:18 2023
        Raid Level : raid1
        Array Size : 10476544 (9.99 GiB 10.73 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 14:38:09 2023
             State : clean, degraded, recovering 	# rebuild in progress
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 11% complete	# rebuild progress: 11%

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 10b32b36:725ad4df:247b7ddc:9c246734
            Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       64        1      spare rebuilding   /dev/sde	# rebuilding

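Instead of re-running mdadm -D, the rebuild can also be followed through the kernel's /proc/mdstat, which shows a per-array progress bar. A read-only sketch; the fallback message covers machines with no md arrays:

```shell
#!/bin/sh
# Show live md status. While a rebuild runs, /proc/mdstat contains lines like:
#   [==>..................]  recovery = 11.0% (...) finish=1.2min
# "watch -n2 cat /proc/mdstat" polls it continuously.
cat /proc/mdstat 2>/dev/null || echo "no md arrays (md module not loaded)"
```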
After the rebuild finishes, check the array status again:

[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Dec 30 14:03:18 2023
        Raid Level : raid1
        Array Size : 10476544 (9.99 GiB 10.73 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 14:38:58 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 10b32b36:725ad4df:247b7ddc:9c246734
            Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       64        1      active sync   /dev/sde	# /dev/sde is back in active state

Now add a hot spare to the RAID 1 array.


[root@localhost ~]# ls /sys/class/scsi_device/
1:0:0:0  2:0:0:0  2:0:1:0  2:0:2:0  2:0:3:0  2:0:4:0  2:0:5:0	# a new device entry has appeared
[root@localhost ~]# vim ./rescandisk.sh
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:1\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:2\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:4\:0/device/rescan
echo 1 > /sys/class/scsi_device/2\:0\:5\:0/device/rescan	# add this line
[root@localhost ~]# ./rescandisk.sh 
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   50G  0 disk  
├─sda1        8:1    0    1G  0 part  /boot
└─sda2        8:2    0   49G  0 part  
  ├─rl-root 253:0    0 45.1G  0 lvm   /
  └─rl-swap 253:1    0  3.9G  0 lvm   [SWAP]
sdb           8:16   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdc           8:32   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdd           8:48   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sde           8:64   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sdf           8:80   0   10G  0 disk  
sr0          11:0    1 1024M  0 rom
[root@localhost ~]# mdadm /dev/md1 -a /dev/sdf
mdadm: added /dev/sdf
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sat Dec 30 14:03:18 2023
        Raid Level : raid1
        Array Size : 10476544 (9.99 GiB 10.73 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 14:51:33 2023
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 10b32b36:725ad4df:247b7ddc:9c246734
            Events : 40

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       64        1      active sync   /dev/sde

       3       8       80        -      spare   /dev/sdf	# /dev/sdf is in spare (standby) state
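One step the walkthrough skips: arrays assemble at boot from their on-disk superblocks, but it is common practice to also record them in /etc/mdadm.conf so the /dev/mdX names stay stable across reboots. A sketch, given as comments because the config path and exact steps vary by distro; the fstab line is a hypothetical example with a placeholder UUID:

```shell
# Capture the current array definitions (run as root):
#   mdadm --detail --scan >> /etc/mdadm.conf
# Then mount at boot via /etc/fstab, referencing the filesystem UUID:
#   UUID=<fs-uuid>  /mnt/raid1  xfs  defaults  0 0
```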

3.3 Configuring a RAID 5 Array

Add four more SCSI disks to the VM: RAID 5 needs at least three disks, and the extra disk will serve as a hot spare.


Rescan the disks:

[root@localhost ~]# ./rescandisk.sh 
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   50G  0 disk  
├─sda1        8:1    0    1G  0 part  /boot
└─sda2        8:2    0   49G  0 part  
  ├─rl-root 253:0    0 45.1G  0 lvm   /
  └─rl-swap 253:1    0  3.9G  0 lvm   [SWAP]
sdb           8:16   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdc           8:32   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdd           8:48   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sde           8:64   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sdf           8:80   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sdg           8:96   0   10G  0 disk  
sdh           8:112  0   10G  0 disk  
sdi           8:128  0   10G  0 disk  
sdj           8:144  0   10G  0 disk  
sr0          11:0    1 1024M  0 rom

Create the array:

[root@localhost ~]# mdadm -Cv /dev/md5 -l 5 -n 3 -x 1 /dev/sd[g-j]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 10476544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Check the block devices and the array status:

[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   50G  0 disk  
├─sda1        8:1    0    1G  0 part  /boot
└─sda2        8:2    0   49G  0 part  
  ├─rl-root 253:0    0 45.1G  0 lvm   /
  └─rl-swap 253:1    0  3.9G  0 lvm   [SWAP]
sdb           8:16   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdc           8:32   0   10G  0 disk  
└─md0         9:0    0   20G  0 raid0 /mnt/raid0
sdd           8:48   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sde           8:64   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sdf           8:80   0   10G  0 disk  
└─md1         9:1    0   10G  0 raid1 /mnt/raid1
sdg           8:96   0   10G  0 disk  
└─md5         9:5    0   20G  0 raid5 
sdh           8:112  0   10G  0 disk  
└─md5         9:5    0   20G  0 raid5 
sdi           8:128  0   10G  0 disk  
└─md5         9:5    0   20G  0 raid5 
sdj           8:144  0   10G  0 disk  
└─md5         9:5    0   20G  0 raid5 
sr0          11:0    1 1024M  0 rom   
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Dec 30 15:10:04 2023
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:10:57 2023
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 369b2f32:a27a7d8a:275d4b95:7911adfc
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       96        0      active sync   /dev/sdg
       1       8      112        1      active sync   /dev/sdh
       4       8      128        2      active sync   /dev/sdi

       3       8      144        -      spare   /dev/sdj	# /dev/sdj is in spare (standby) state

Create a filesystem:

[root@localhost ~]# mkfs.xfs /dev/md5
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md5               isize=512    agcount=16, agsize=327296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5236736, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount and use it:

[root@localhost ~]# mkdir /mnt/raid5
[root@localhost ~]# mount /dev/md5 /mnt/raid5
[root@localhost ~]# echo 'raid5' > /mnt/raid5/file.txt
[root@localhost ~]# cat /mnt/raid5/file.txt
raid5

Simulate a disk failure by marking one of the active disks as faulty:

[root@localhost ~]# mdadm -f /dev/md5 /dev/sdg
mdadm: set /dev/sdg faulty in /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Dec 30 15:10:04 2023
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:13:37 2023
             State : clean, degraded, recovering # degraded; the spare is rebuilding
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 15% complete	# rebuild progress: 15%

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 369b2f32:a27a7d8a:275d4b95:7911adfc
            Events : 22

    Number   Major   Minor   RaidDevice State
       3       8      144        0      spare rebuilding   /dev/sdj	# spare disk /dev/sdj is rebuilding
       1       8      112        1      active sync   /dev/sdh
       4       8      128        2      active sync   /dev/sdi

       0       8       96        -      faulty   /dev/sdg

The spare disk automatically takes over for the failed one and rebuilds the data. Once the rebuild completes, check the array status again:

[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Dec 30 15:10:04 2023
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:14:23 2023
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 369b2f32:a27a7d8a:275d4b95:7911adfc
            Events : 37

    Number   Major   Minor   RaidDevice State
       3       8      144        0      active sync   /dev/sdj
       1       8      112        1      active sync   /dev/sdh
       4       8      128        2      active sync   /dev/sdi

       0       8       96        -      faulty   /dev/sdg

Now the "failed" disk can be removed:

[root@localhost ~]# mdadm -r /dev/md5 /dev/sdg
mdadm: hot removed /dev/sdg from /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Dec 30 15:10:04 2023
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:17:28 2023
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 369b2f32:a27a7d8a:275d4b95:7911adfc
            Events : 38

    Number   Major   Minor   RaidDevice State
       3       8      144        0      active sync   /dev/sdj
       1       8      112        1      active sync   /dev/sdh
       4       8      128        2      active sync   /dev/sdi

Simulate the recovery: suppose the failed disk has been pulled from the server and a new disk fitted in the same slot, then add the new disk back into the array.

Because /dev/sdg was previously an array member, wipe the head of the disk with dd to better simulate a brand-new disk:

[root@localhost ~]# dd if=/dev/zero of=/dev/sdg bs=1M count=100 status=progress
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.789503 s, 133 MB/s

Add the disk to the array:

[root@localhost ~]# mdadm /dev/md5 -a /dev/sdg
mdadm: added /dev/sdg
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Dec 30 15:10:04 2023
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:20:21 2023
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 369b2f32:a27a7d8a:275d4b95:7911adfc
            Events : 39

    Number   Major   Minor   RaidDevice State
       3       8      144        0      active sync   /dev/sdj
       1       8      112        1      active sync   /dev/sdh
       4       8      128        2      active sync   /dev/sdi

       5       8       96        -      spare   /dev/sdg	# /dev/sdg is now the hot spare

3.4 Configuring a RAID 6 Array

To conserve lab resources, stop all the arrays configured in sections 3.1 through 3.3:

[root@localhost ~]# umount /dev/md0
[root@localhost ~]# umount /dev/md1
[root@localhost ~]# umount /dev/md5
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0   50G  0 disk 
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   49G  0 part 
  ├─rl-root 253:0    0 45.1G  0 lvm  /
  └─rl-swap 253:1    0  3.9G  0 lvm  [SWAP]
sdb           8:16   0   10G  0 disk 
sdc           8:32   0   10G  0 disk 
sdd           8:48   0   10G  0 disk 
sde           8:64   0   10G  0 disk 
sdf           8:80   0   10G  0 disk 
sdg           8:96   0   10G  0 disk 
sdh           8:112  0   10G  0 disk 
sdi           8:128  0   10G  0 disk 
sdj           8:144  0   10G  0 disk 
sr0          11:0    1 1024M  0 rom

Check the block device IDs: /dev/sdb through /dev/sdj already carry UUID, LABEL, and TYPE attributes because they were previously configured as RAID members:

[root@localhost ~]# blkid
/dev/sda1: UUID="9d58ddae-ddae-4a83-8841-4b9863b55ab5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="506a5691-01"
/dev/sda2: UUID="LE8Gll-uYvD-F1dG-VLMB-HjIK-BQru-DTbzTv" TYPE="LVM2_member" PARTUUID="506a5691-02"
/dev/mapper/rl-root: UUID="95a7297e-a906-4ed8-af8c-fe61a6ba6028" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/rl-swap: UUID="d6ef884e-f387-434c-9273-ac9efb1a5d73" TYPE="swap"
/dev/sdc: UUID="4f98ca1a-36f5-c57a-77cc-72bf4d8454e8" UUID_SUB="1352f036-759f-f660-aa18-e52775171df0" LABEL="localhost.localdomain:0" TYPE="linux_raid_member"
/dev/sdb: UUID="4f98ca1a-36f5-c57a-77cc-72bf4d8454e8" UUID_SUB="8bd7485e-90e2-ee27-b0ed-a23af93f68aa" LABEL="localhost.localdomain:0" TYPE="linux_raid_member"
/dev/sdd: UUID="10b32b36-725a-d4df-247b-7ddc9c246734" UUID_SUB="13821e98-5e7a-43ab-81bd-bc7109154be1" LABEL="localhost.localdomain:1" TYPE="linux_raid_member"
/dev/sde: UUID="10b32b36-725a-d4df-247b-7ddc9c246734" UUID_SUB="287bb673-5e06-3d72-dbf6-d048c2e85be9" LABEL="localhost.localdomain:1" TYPE="linux_raid_member"
/dev/sdf: UUID="10b32b36-725a-d4df-247b-7ddc9c246734" UUID_SUB="e4ac84c8-02b4-6ffe-78c5-fd549cbed9f9" LABEL="localhost.localdomain:1" TYPE="linux_raid_member"
/dev/sdg: UUID="369b2f32-a27a-7d8a-275d-4b957911adfc" UUID_SUB="3766c9ca-682a-b8b1-b784-541ac430c2fe" LABEL="localhost.localdomain:5" TYPE="linux_raid_member"
/dev/sdh: UUID="369b2f32-a27a-7d8a-275d-4b957911adfc" UUID_SUB="f6fe4f8b-5328-e6dd-458b-6f1fbf896434" LABEL="localhost.localdomain:5" TYPE="linux_raid_member"
/dev/sdi: UUID="369b2f32-a27a-7d8a-275d-4b957911adfc" UUID_SUB="a78a9c80-2746-9263-902d-af9b0b72c7a7" LABEL="localhost.localdomain:5" TYPE="linux_raid_member"
/dev/sdj: UUID="369b2f32-a27a-7d8a-275d-4b957911adfc" UUID_SUB="6142ac51-3590-65db-bcd8-80d59c89dac6" LABEL="localhost.localdomain:5" TYPE="linux_raid_member"

To keep these leftovers from affecting later experiments, wipe the head of each disk with dd to simulate clean disks:

[root@localhost ~]# for i in sd{b..j}; do dd if=/dev/zero of=/dev/$i bs=1M count=100 status=progress; done	# wipe the first 100 MiB of each disk
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.242658 s, 432 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.254125 s, 413 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.253367 s, 414 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.244166 s, 429 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.245798 s, 427 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.249635 s, 420 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.246592 s, 425 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.245579 s, 427 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.250217 s, 419 MB/s
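An alternative to the dd loop: util-linux's wipefs works directly on the signatures that blkid reported. Demonstrated here on a scratch file so it is safe to run; the per-disk equivalent is noted in the comments:

```shell
#!/bin/sh
# wipefs lists and removes filesystem/RAID signatures.
# Against the real disks, the equivalent of the dd loop would be:
#   for i in sd{b..j}; do wipefs -a /dev/$i; done
scratch=$(mktemp)
if command -v wipefs >/dev/null; then
    wipefs "$scratch"    # with no options it only lists signatures (none here)
fi
rm -f "$scratch"
```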

Check the block device IDs again:

[root@localhost ~]# blkid
/dev/sda1: UUID="9d58ddae-ddae-4a83-8841-4b9863b55ab5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="506a5691-01"
/dev/sda2: UUID="LE8Gll-uYvD-F1dG-VLMB-HjIK-BQru-DTbzTv" TYPE="LVM2_member" PARTUUID="506a5691-02"
/dev/mapper/rl-root: UUID="95a7297e-a906-4ed8-af8c-fe61a6ba6028" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/rl-swap: UUID="d6ef884e-f387-434c-9273-ac9efb1a5d73" TYPE="swap"

RAID 6 needs at least four disks; use five, with one acting as a hot spare:

[root@localhost ~]# mdadm -Cv /dev/md6 -l 6 -n 4 -x 1 /dev/sd[b-f]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 10476544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.

Check the block devices and the array status:

[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   50G  0 disk  
├─sda1        8:1    0    1G  0 part  /boot
└─sda2        8:2    0   49G  0 part  
  ├─rl-root 253:0    0 45.1G  0 lvm   /
  └─rl-swap 253:1    0  3.9G  0 lvm   [SWAP]
sdb           8:16   0   10G  0 disk  
└─md6         9:6    0   20G  0 raid6 
sdc           8:32   0   10G  0 disk  
└─md6         9:6    0   20G  0 raid6 
sdd           8:48   0   10G  0 disk  
└─md6         9:6    0   20G  0 raid6 
sde           8:64   0   10G  0 disk  
└─md6         9:6    0   20G  0 raid6 
sdf           8:80   0   10G  0 disk  
└─md6         9:6    0   20G  0 raid6 
sdg           8:96   0   10G  0 disk  
sdh           8:112  0   10G  0 disk  
sdi           8:128  0   10G  0 disk  
sdj           8:144  0   10G  0 disk  
sr0          11:0    1 1024M  0 rom   
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:40:32 2023
             State : clean, resyncing # initial sync in progress
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 48% complete	# sync progress: 48%

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 7

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       4       8       80        -      spare   /dev/sdf	# /dev/sdf is in spare (standby) state

Create a filesystem:

[root@localhost ~]# mkfs.xfs /dev/md6
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md6               isize=512    agcount=16, agsize=327296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5236736, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount and use it (note: re-running mkfs.xfs /dev/md6 at this point would report that the device "appears to contain an existing filesystem" and require the -f flag to force an overwrite)
[root@localhost ~]# mkdir /mnt/raid6
[root@localhost ~]# mount /dev/md6 /mnt/raid6
[root@localhost ~]# echo 'raid6' > /mnt/raid6/file.txt
[root@localhost ~]# cat /mnt/raid6/file.txt
raid6
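Note that neither the array assembly nor the mount above survives a reboot by itself. A minimal persistence sketch, assuming the paths from this walkthrough: record the array with `mdadm --detail --scan >> /etc/mdadm.conf` (as root), then append an fstab entry. The snippet below only builds and prints the fstab line so it can be inspected before appending:

```shell
# Build the fstab entry for the RAID 6 array mounted above.
# This uses the /dev/md6 device path; a UUID= entry (from `blkid /dev/md6`)
# is more robust if md device numbering might change across boots.
fstab_line='/dev/md6  /mnt/raid6  xfs  defaults  0 0'
echo "$fstab_line"
# As root: echo "$fstab_line" >> /etc/fstab && mount -a   # verify it parses
```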

Simulate a disk failure by marking one of the active disks as faulty

[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md6
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:46:47 2023
             State : clean, degraded, recovering # degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 2% complete	# rebuild progress: 2%

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       80        1      spare rebuilding   /dev/sdf	# spare /dev/sdf is rebuilding data
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       1       8       32        -      faulty   /dev/sdc	# /dev/sdc is in faulty state

The spare disk automatically takes over for the failed disk and rebuilds the data. After the rebuild finishes, check the array status again
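While waiting, you can poll /proc/mdstat instead of re-running mdadm -D (e.g. `watch -n1 cat /proc/mdstat`). For scripting, the percentage can be extracted from the recovery line; the sketch below is fed a sample mdstat-style line for illustration:

```shell
# Extract the rebuild percentage from an mdstat-style recovery line.
# The sample string stands in for a line from /proc/mdstat.
sample='[>....................]  recovery =  2.0% (215040/10476544) finish=0.8min speed=215040K/sec'
pct=$(echo "$sample" | grep -o '[0-9.]*%')
echo "$pct"   # → 2.0%
```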

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:47:59 2023
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       80        1      active sync   /dev/sdf
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       1       8       32        -      faulty   /dev/sdc

Simulate the failure of a second disk

[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md6
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:51:19 2023
             State : clean, degraded # degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 2
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 38

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       4       8       80        1      active sync   /dev/sdf
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb
       1       8       32        -      faulty   /dev/sdc

Now the "failed" disks can be removed. (Failing and removing can also be chained in a single manage-mode call, e.g. mdadm /dev/md6 -f /dev/sdc -r /dev/sdc.)

[root@localhost ~]# mdadm -r /dev/md6 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md6
[root@localhost ~]# mdadm -r /dev/md6 /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md6
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:54:22 2023
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 40

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       4       8       80        1      active sync   /dev/sdf
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

Simulate recovery from the failure: assume the faulty drives have been pulled from the server and replaced with new drives in the same slots, then add the new drives to the array.

Because /dev/sdb and /dev/sdc have previously been array members, we wipe the start of each disk with dd to better simulate brand-new disks

[root@localhost ~]# dd if=/dev/zero of=/dev/sdb bs=1M count=100 status=progress	# wipe the first 100 MiB
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.354077 s, 296 MB/s
[root@localhost ~]# dd if=/dev/zero of=/dev/sdc bs=1M count=100 status=progress
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.297151 s, 353 MB/s

Add the disks back into the array

[root@localhost ~]# mdadm /dev/md6 -a /dev/sdb
mdadm: added /dev/sdb
[root@localhost ~]# mdadm /dev/md6 -a /dev/sdc
mdadm: added /dev/sdc
[root@localhost ~]# 
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:56:45 2023
             State : clean, degraded, recovering # 已降级,恢复中
    Active Devices : 3
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 8% complete	# rebuild progress: 8%

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 45

    Number   Major   Minor   RaidDevice State
       5       8       16        0      spare rebuilding   /dev/sdb # /dev/sdb is rebuilding
       4       8       80        1      active sync   /dev/sdf
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       6       8       32        -      spare   /dev/sdc	# /dev/sdc now serves as the hot spare

After the data rebuild completes, check the array status again

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Dec 30 15:39:52 2023
        Raid Level : raid6
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 15:57:50 2023
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 33736369:d647d4d8:11e25960:087e34de
            Events : 62

    Number   Major   Minor   RaidDevice State
       5       8       16        0      active sync   /dev/sdb
       4       8       80        1      active sync   /dev/sdf
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       6       8       32        -      spare   /dev/sdc

3.5 Configure a RAID 10 disk array

To save lab resources, stop the disk array configured in section 3.4

[root@localhost ~]# umount /dev/md6
[root@localhost ~]# mdadm -S /dev/md6
mdadm: stopped /dev/md6
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0   50G  0 disk 
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   49G  0 part 
  ├─rl-root 253:0    0 45.1G  0 lvm  /
  └─rl-swap 253:1    0  3.9G  0 lvm  [SWAP]
sdb           8:16   0   10G  0 disk 
sdc           8:32   0   10G  0 disk 
sdd           8:48   0   10G  0 disk 
sde           8:64   0   10G  0 disk 
sdf           8:80   0   10G  0 disk 
sdg           8:96   0   10G  0 disk 
sdh           8:112  0   10G  0 disk 
sdi           8:128  0   10G  0 disk 
sdj           8:144  0   10G  0 disk 
sr0          11:0    1 1024M  0 rom

Check the block device IDs. The /dev/sdb through /dev/sdf disks already carry UUID, LABEL, TYPE, and other attributes, because they were previously configured into a disk array

[root@localhost ~]# blkid
/dev/sda1: UUID="9d58ddae-ddae-4a83-8841-4b9863b55ab5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="506a5691-01"
/dev/sda2: UUID="LE8Gll-uYvD-F1dG-VLMB-HjIK-BQru-DTbzTv" TYPE="LVM2_member" PARTUUID="506a5691-02"
/dev/mapper/rl-root: UUID="95a7297e-a906-4ed8-af8c-fe61a6ba6028" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/rl-swap: UUID="d6ef884e-f387-434c-9273-ac9efb1a5d73" TYPE="swap"
/dev/sdc: UUID="33736369-d647-d4d8-11e2-5960087e34de" UUID_SUB="482d5064-2367-d320-9543-3a4b08a9978f" LABEL="localhost.localdomain:6" TYPE="linux_raid_member"
/dev/sdd: UUID="33736369-d647-d4d8-11e2-5960087e34de" UUID_SUB="6981e4e2-78e0-ac7a-42a0-8e5cc091dbaa" LABEL="localhost.localdomain:6" TYPE="linux_raid_member"
/dev/sde: UUID="33736369-d647-d4d8-11e2-5960087e34de" UUID_SUB="d0b1dad6-523c-f116-24be-c7317ca9906b" LABEL="localhost.localdomain:6" TYPE="linux_raid_member"
/dev/sdb: UUID="33736369-d647-d4d8-11e2-5960087e34de" UUID_SUB="6e72e16a-acaf-9c04-e6b3-ff21017ea679" LABEL="localhost.localdomain:6" TYPE="linux_raid_member"
/dev/sdf: UUID="33736369-d647-d4d8-11e2-5960087e34de" UUID_SUB="193cbb8f-56f1-5714-51ee-4f68d053b5a9" LABEL="localhost.localdomain:6" TYPE="linux_raid_member"

To avoid affecting later experiments, wipe the start of each disk with dd to simulate clean disks. (mdadm --zero-superblock /dev/sdX does the same job more surgically, erasing only the md metadata.)

[root@localhost ~]# for i in sd{b..f}; do dd if=/dev/zero of=/dev/$i bs=1M count=100 status=progress; done	# wipe the first 100 MiB of each disk
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.267211 s, 392 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.272766 s, 384 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.262857 s, 399 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.285595 s, 367 MB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.285257 s, 368 MB/s

Check the block device IDs again

[root@localhost ~]# blkid
/dev/sda1: UUID="9d58ddae-ddae-4a83-8841-4b9863b55ab5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="506a5691-01"
/dev/sda2: UUID="LE8Gll-uYvD-F1dG-VLMB-HjIK-BQru-DTbzTv" TYPE="LVM2_member" PARTUUID="506a5691-02"
/dev/mapper/rl-root: UUID="95a7297e-a906-4ed8-af8c-fe61a6ba6028" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/rl-swap: UUID="d6ef884e-f387-434c-9273-ac9efb1a5d73" TYPE="swap"

RAID 10 needs at least 4 disks; here we use 5, with the extra disk serving as a hot spare

[root@localhost ~]# mdadm -Cv /dev/md10 -l 10 -n 4 -x 1 /dev/sd[b-f]
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 10476544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

Check the block devices and the array status

[root@localhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE   MOUNTPOINT
sda           8:0    0   50G  0 disk   
├─sda1        8:1    0    1G  0 part   /boot
└─sda2        8:2    0   49G  0 part   
  ├─rl-root 253:0    0 45.1G  0 lvm    /
  └─rl-swap 253:1    0  3.9G  0 lvm    [SWAP]
sdb           8:16   0   10G  0 disk   
└─md10        9:10   0   20G  0 raid10 
sdc           8:32   0   10G  0 disk   
└─md10        9:10   0   20G  0 raid10 
sdd           8:48   0   10G  0 disk   
└─md10        9:10   0   20G  0 raid10 
sde           8:64   0   10G  0 disk   
└─md10        9:10   0   20G  0 raid10 
sdf           8:80   0   10G  0 disk   
└─md10        9:10   0   20G  0 raid10 
sdg           8:96   0   10G  0 disk   
sdh           8:112  0   10G  0 disk   
sdi           8:128  0   10G  0 disk   
sdj           8:144  0   10G  0 disk   
sr0          11:0    1 1024M  0 rom    
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 16:48:44 2023
             State : clean, resyncing # data is resyncing
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 37% complete	# resync progress: 37%

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       4       8       80        -      spare   /dev/sdf	# /dev/sdf is the hot spare
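With the near=2 layout, the four active members form two mirrored pairs (set-A/set-B above) striped together, so usable capacity = per-member size × n / 2. Checking against the numbers above (the helper function is ours, for illustration only):

```shell
# Hypothetical helper: usable RAID 10 (near=2) capacity in KiB,
# given the per-member size (KiB) and an even active member count n.
raid10_size() { echo $(( $1 * $2 / 2 )); }

raid10_size 10476544 4   # matches the Array Size reported above: 20953088
```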

After the data resync completes, check the array status

[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 16:49:58 2023
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       4       8       80        -      spare   /dev/sdf

Create a filesystem

[root@localhost ~]# mkfs.xfs /dev/md10
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md10              isize=512    agcount=16, agsize=327296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=5236736, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount and use it

[root@localhost ~]# mkdir /mnt/raid10
[root@localhost ~]# mount /dev/md10 /mnt/raid10
[root@localhost ~]# echo 'raid10' > /mnt/raid10/file.txt
[root@localhost ~]# cat /mnt/raid10/file.txt
raid10
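For unattended monitoring, the State field can be parsed instead of reading `mdadm -D` by eye (mdadm also has a monitor mode, the -F parameter from the table in section 1.3, which can send alerts). A sketch of the parsing, fed here with a sample State line in place of live `mdadm -D /dev/md10` output:

```shell
# Extract the State value from mdadm -D style output; anything beyond
# "clean" or "active" usually deserves attention. The heredoc stands
# in for `mdadm -D /dev/md10` here.
state=$(sed -n 's/^ *State : //p' <<'EOF'
             State : clean, degraded, recovering
EOF
)
echo "$state"
case "$state" in
  clean|active) echo "array healthy" ;;
  *)            echo "array needs attention" ;;
esac
```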

Simulate a disk failure by marking one of the active disks as faulty

[root@localhost ~]# mdadm -f /dev/md10 /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md10
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 16:55:19 2023
             State : clean, degraded, recovering # degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 9% complete	# rebuild progress: 9%

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       4       8       80        2      spare rebuilding   /dev/sdf # spare /dev/sdf is rebuilding data
       3       8       64        3      active sync set-B   /dev/sde

       2       8       48        -      faulty   /dev/sdd	# /dev/sdd is in faulty state

The spare disk automatically takes over for the failed disk and rebuilds the data. After the rebuild finishes, check the array status again

[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 16:56:08 2023
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       4       8       80        2      active sync set-A   /dev/sdf
       3       8       64        3      active sync set-B   /dev/sde

       2       8       48        -      faulty   /dev/sdd

Simulate the failure of a second disk

[root@localhost ~]# mdadm -f /dev/md10 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md10
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 16:58:00 2023
             State : clean, degraded # degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 2
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 38

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       4       8       80        2      active sync set-A   /dev/sdf
       3       8       64        3      active sync set-B   /dev/sde

       0       8       16        -      faulty   /dev/sdb
       2       8       48        -      faulty   /dev/sdd

Now the "failed" disks can be removed

[root@localhost ~]# mdadm -r /dev/md10 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md10
[root@localhost ~]# mdadm -r /dev/md10 /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md10
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 16:59:09 2023
             State : clean, degraded # degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 40

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       4       8       80        2      active sync set-A   /dev/sdf
       3       8       64        3      active sync set-B   /dev/sde

Simulate recovery from the failure: assume the faulty drives have been pulled from the server and replaced with new drives in the same slots, then add the new drives to the array.

Because /dev/sdb and /dev/sdd have previously been array members, we wipe the start of each disk with dd to better simulate brand-new disks

[root@localhost ~]# dd if=/dev/zero of=/dev/sdb bs=1M count=100 status=progress
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.275985 s, 380 MB/s
[root@localhost ~]# dd if=/dev/zero of=/dev/sdd bs=1M count=100 status=progress
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.270132 s, 388 MB/s

Add the disks back into the array

[root@localhost ~]# mdadm /dev/md10 -a /dev/sdb /dev/sdd
mdadm: added /dev/sdb
mdadm: added /dev/sdd
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 17:01:23 2023
             State : clean, degraded, recovering # degraded, recovering
    Active Devices : 3
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 2

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 5% complete	# rebuild progress: 5%

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 43

    Number   Major   Minor   RaidDevice State
       6       8       48        0      spare rebuilding   /dev/sdd # /dev/sdd is rebuilding
       1       8       32        1      active sync set-B   /dev/sdc
       4       8       80        2      active sync set-A   /dev/sdf
       3       8       64        3      active sync set-B   /dev/sde

       5       8       16        -      spare   /dev/sdb	# /dev/sdb now serves as the hot spare

After the data rebuild completes, check the array status again

[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sat Dec 30 16:48:12 2023
        Raid Level : raid10
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Dec 30 17:02:15 2023
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : a6031ef1:e352c58e:aa26ef23:4165550a
            Events : 60

    Number   Major   Minor   RaidDevice State
       6       8       48        0      active sync set-A   /dev/sdd
       1       8       32        1      active sync set-B   /dev/sdc
       4       8       80        2      active sync set-A   /dev/sdf
       3       8       64        3      active sync set-B   /dev/sde

       5       8       16        -      spare   /dev/sdb

See also

Linux命令-mdadm管理磁盘阵列组-CSDN博客

RAID - ArchWiki

From: https://blog.51cto.com/min2000/9041464
