Mounting Multiple Physical Disks Under One Directory (LVM Soft RAID)
Background
One of the machines at work is a Sunway 3231. Its RAID card has no management interface, so a RAID array can only be created from the command line. Since I am still pretty green in this area, I wanted to first try building a soft RAID with LVM and verify its performance. Hence this article.
Quick Conclusion
Although LVM can merge multiple disks into a single volume and thereby enlarge a directory's capacity, fio testing shows that performance is virtually identical to a single disk, and in places slightly worse. So the performance gain from this LVM setup is very limited (if not harmful); a hardware RAID card is still the right tool. A likely explanation: lvcreate without -i/--stripes builds a linear volume that concatenates the disks rather than striping I/O across them (see the note in the LV-creation step below).
Aggregate IOPS across all fio tests:
ext4 (3-disk LVM): 120.8k
xfs (3-disk LVM): 127.5k
single SSD: 137.5k
Note that aggregate IOPS is only one lens on the results. I computed it with the commands below: fio prints some IOPS values with a k suffix and some without, so the two kinds are summed separately and then combined by hand.
cat xfs |grep IOPS |awk -F "=" '{print $2}'|awk -F "," '{print $1}' |grep -v k |awk 'BEGIN{sum=0}{sum+=$1}END{print sum}'
cat xfs |grep IOPS |awk -F "=" '{print $2}'|awk -F "," '{print $1}' |grep k |awk 'BEGIN{sum=0}{sum+=$1}END{print sum}'
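The two-command split exists only because of the mixed units. A single pass that normalizes the k suffix would look like this (a sketch, assuming the fio output is saved in a file named xfs as above):

grep IOPS xfs | awk -F'=' '{
    split($2, a, ",");                              # a[1] is the raw IOPS value, e.g. "13.6k" or "2586"
    v = a[1];
    if (v ~ /k$/) { sub(/k$/, "", v); v *= 1000 }   # expand the k suffix
    sum += v
} END { print sum }'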
Procedure: Initializing the Disks
df -Th only shows filesystems that are already mounted; lsblk lists every block device, mounted or not:
lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0  1.8T  0 disk
sdb        8:16   0  1.8T  0 disk
sdc        8:32   0  1.8T  0 disk
sdd        8:48   0  1.8T  0 disk
├─sdd1     8:49   0    1G  0 part /boot
The three data disks (sda, sdb, sdc) are still in their raw state, so each one needs to be partitioned:
fdisk /dev/sda
The interactive session looks like this:
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xfe7c6f0f.
# Step 1: create a new partition
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
# Step 2: make it a primary partition
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-3750748847, default 2048):
Last sector, +/-sectors or +size{K,M,G,T,P} (2048-3750748847, default 3750748847):
# the defaults are fine for all of these
Created a new partition 1 of type 'Linux' and of size 1.8 TiB.
# Step 3: change the partition type
Command (m for help): t
Selected partition 1
# Step 4: the hex code for Linux LVM is 8e
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'.
# Step 5: write the changes to disk
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
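Repeating this interactive session for sdb and sdc is tedious; sfdisk (from the same util-linux package as fdisk) can script it. A minimal sketch, assuming all three disks are blank as shown above:

for disk in /dev/sda /dev/sdb /dev/sdc; do
    # create one primary partition spanning the whole disk, MBR type 8e (Linux LVM)
    echo 'type=8e' | sfdisk "$disk"
done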
Procedure: Creating the PVs
Create a physical volume on each new partition. Note the target is the partition (sda1), not the whole disk (sda), since the partitions are what was just typed as Linux LVM:
pvcreate /dev/sda1
pvcreate /dev/sdb1
pvcreate /dev/sdc1
After creation, lsblk shows:
[root@localhost ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0  1.8T  0 disk
└─sda1     8:1    0  1.8T  0 part
sdb        8:16   0  1.8T  0 disk
└─sdb1     8:17   0  1.8T  0 part
sdc        8:32   0  1.8T  0 disk
└─sdc1     8:33   0  1.8T  0 part
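lsblk only proves the partitions exist; the LVM2 query tools confirm they are registered as PVs (output omitted here):

pvs                    # one summary line per physical volume
pvdisplay /dev/sda1    # full details for a single PV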
Procedure: Creating the VG
Create a volume group named sw_ssd from the three PVs:
vgcreate sw_ssd /dev/sda1 /dev/sdb1 /dev/sdc1
The VG details can then be inspected:
vgdisplay sw_ssd
--- Volume group ---
VG Name               sw_ssd
System ID
Format                lvm2
Metadata Areas        3
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                3
Act PV                3
VG Size               <5.24 TiB
PE Size               4.00 MiB
Total PE              1373562
Alloc PE / Size       0 / 0
Free  PE / Size       1373562 / <5.24 TiB
VG UUID               AalcfI-tW0K-sjrk-m9dA-u14l-UKTi-vzcT8l
Procedure: Creating the LV
lvcreate -l 100%VG -n sw_lv sw_ssd
# Flag notes:
# -l sizes the LV in logical extents; percentages such as 100%VG are allowed
# -L sizes the LV in absolute units instead (e.g. -L 5T); not used here
# -n sets the name of the new LV, here sw_lv
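Worth noting: without -i/--stripes, lvcreate creates a linear LV that simply concatenates the three PVs, so any one sequential stream still hits a single disk at a time. That matches the flat fio numbers below. For actual soft-RAID0-like behavior the LV would have to be striped, e.g. (a sketch I did not test here; the 64 KiB stripe size is an arbitrary assumption):

# stripe across all 3 PVs with a 64 KiB stripe size
lvcreate -i 3 -I 64 -l 100%VG -n sw_lv sw_ssd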
lvdisplay
--- Logical volume ---
LV Path                /dev/sw_ssd/sw_lv
LV Name                sw_lv
VG Name                sw_ssd
LV UUID                owTM2i-AchR-XKZ4-QBfs-CdUL-FeO8-gpz4Ys
LV Write Access        read/write
LV Creation host, time localhost.localdomain, 2023-06-18 13:11:15 +0800
LV Status              available
# open                 0
LV Size                <5.24 TiB
Current LE             1373562
Segments               3
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0
Procedure: Creating the Filesystem and Mounting
mkfs.xfs /dev/sw_ssd/sw_lv
# start with xfs; the LV can always be reformatted for further rounds of testing
mkdir -p /data
# mount the LV at /data
mount /dev/sw_ssd/sw_lv /data
# make the mount persistent across reboots
vim /etc/fstab
# add this line:
/dev/sw_ssd/sw_lv /data xfs defaults 0 0
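The new fstab line can be validated without rebooting (a small sketch; run it while the LV is unmounted):

umount /data     # only needed if /data is currently mounted
mount -a         # mounts every fstab entry; a typo in the new line errors out here
df -Th /data     # confirm /data is backed by /dev/mapper/sw_ssd-sw_lv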
Procedure: Switching the Filesystem Type
fuser -mv /data
# list the processes using /data (fuser -km /data would kill them); stop them first
umount /data
# unmount the volume
mkfs.ext4 /dev/sw_ssd/sw_lv
# to keep using ext4 across reboots, also change xfs to ext4 in the fstab entry
mount /dev/sw_ssd/sw_lv /data
# verify the new filesystem:
[root@localhost deploy]# df -Th
Filesystem               Type      Size  Used Avail Use% Mounted on
devtmpfs                 devtmpfs   77G     0   77G   0% /dev
tmpfs                    tmpfs     127G   24K  127G   1% /dev/shm
tmpfs                    tmpfs     127G  6.6M  127G   1% /run
tmpfs                    tmpfs     127G     0  127G   0% /sys/fs/cgroup
/dev/sdd3                xfs       1.1T   21G  1.1T   2% /
tmpfs                    tmpfs     127G   16K  127G   1% /tmp
/dev/sdd1                ext3      976M  197M  728M  22% /boot
/dev/sdd4                xfs       671G  4.0G  667G   1% /home
/dev/sdd5                xfs        30G   12G   19G  39% /backup
tmpfs                    tmpfs      26G     0   26G   0% /run/user/990
tmpfs                    tmpfs      26G     0   26G   0% /run/user/0
/dev/mapper/sw_ssd-sw_lv ext4      5.2T   60M  5.0T   1% /data
Performance Testing: Single-SSD Baseline
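The fio job definitions did not make it into my notes. A representative invocation consistent with the numbers below (4 GiB of data, capped at roughly 30 s of runtime) might look like the following, where ioengine, iodepth and the test file path are assumptions; each test varies --rw (write/read/randwrite/randread) and --bs (128k/16k/8k/1k):

fio --name=write128k --filename=/data/fio.test \
    --rw=write --bs=128k --size=4G --runtime=30 \
    --direct=1 --ioengine=libaio --iodepth=32 --group_reporting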
write128k
write: IOPS=2586, BW=323MiB/s (339MB/s)(4096MiB/12669msec)
read128k
read: IOPS=3914, BW=489MiB/s (513MB/s)(4096MiB/8371msec)
randwrite128k
write: IOPS=2670, BW=334MiB/s (350MB/s)(4096MiB/12272msec)
randread128k
read: IOPS=2768, BW=346MiB/s (363MB/s)(4096MiB/11835msec)
write16k
write: IOPS=8607, BW=134MiB/s (141MB/s)(4035MiB/30001msec)
read16k
read: IOPS=13.6k, BW=213MiB/s (223MB/s)(4096MiB/19265msec)
randwrite16k
write: IOPS=8623, BW=135MiB/s (141MB/s)(4042MiB/30001msec)
randread16k
read: IOPS=6272, BW=98.0MiB/s (103MB/s)(2940MiB/30001msec)
write8k
write: IOPS=10.5k, BW=81.0MiB/s (85.0MB/s)(2460MiB/30001msec)
read8k
read: IOPS=14.6k, BW=114MiB/s (119MB/s)(3414MiB/30001msec)
randwrite8k
write: IOPS=10.1k, BW=78.9MiB/s (82.7MB/s)(2366MiB/30001msec)
randread8k
read: IOPS=7411, BW=57.9MiB/s (60.7MB/s)(1737MiB/30001msec)
write1k
write: IOPS=11.9k, BW=11.6MiB/s (12.1MB/s)(348MiB/30001msec)
read1k
read: IOPS=14.4k, BW=14.1MiB/s (14.8MB/s)(423MiB/30001msec)
randwrite1k
write: IOPS=11.2k, BW=10.9MiB/s (11.4MB/s)(327MiB/30001msec)
randread1k
read: IOPS=8366, BW=8366KiB/s (8567kB/s)(245MiB/30001msec)
Performance Testing: Three-SSD LVM Volume, xfs
write128k
write: IOPS=2615, BW=327MiB/s (343MB/s)(4096MiB/12529msec)
read128k
read: IOPS=2999, BW=375MiB/s (393MB/s)(4096MiB/10925msec)
randwrite128k
write: IOPS=2708, BW=339MiB/s (355MB/s)(4096MiB/12100msec)
randread128k
read: IOPS=2198, BW=275MiB/s (288MB/s)(4096MiB/14903msec)
write16k
write: IOPS=9368, BW=146MiB/s (154MB/s)(4096MiB/27980msec)
read16k
read: IOPS=10.0k, BW=156MiB/s (164MB/s)(4096MiB/26176msec)
randwrite16k
write: IOPS=9134, BW=143MiB/s (150MB/s)(4096MiB/28698msec)
randread16k
read: IOPS=4940, BW=77.2MiB/s (80.9MB/s)(2316MiB/30001msec)
write8k
write: IOPS=11.2k, BW=87.6MiB/s (91.8MB/s)(2627MiB/30001msec)
read8k
read: IOPS=11.6k, BW=90.0MiB/s (95.4MB/s)(2730MiB/30001msec)
randwrite8k
write: IOPS=10.7k, BW=83.6MiB/s (87.7MB/s)(2509MiB/30001msec)
randread8k
read: IOPS=5861, BW=45.8MiB/s (48.0MB/s)(1374MiB/30001msec)
write1k
write: IOPS=12.5k, BW=12.2MiB/s (12.7MB/s)(365MiB/30004msec)
read1k
read: IOPS=13.8k, BW=13.5MiB/s (14.2MB/s)(406MiB/30001msec)
randwrite1k
write: IOPS=12.5k, BW=12.2MiB/s (12.8MB/s)(367MiB/30001msec)
randread1k
read: IOPS=5385, BW=5386KiB/s (5515kB/s)(158MiB/30001msec)
Performance Testing: Three-SSD LVM Volume, ext4
write128k
write: IOPS=2366, BW=296MiB/s (310MB/s)(4096MiB/13846msec)
read128k
read: IOPS=2937, BW=367MiB/s (385MB/s)(4096MiB/11156msec)
randwrite128k
write: IOPS=2644, BW=331MiB/s (347MB/s)(4096MiB/12393msec)
randread128k
read: IOPS=2097, BW=262MiB/s (275MB/s)(4096MiB/15619msec)
write16k
write: IOPS=8844, BW=138MiB/s (145MB/s)(4096MiB/29639msec)
read16k
read: IOPS=9838, BW=154MiB/s (161MB/s)(4096MiB/26645msec)
randwrite16k
write: IOPS=8519, BW=133MiB/s (140MB/s)(3994MiB/30001msec)
randread16k
read: IOPS=5092, BW=79.6MiB/s (83.4MB/s)(2387MiB/30001msec)
write8k
write: IOPS=10.4k, BW=81.0MiB/s (84.0MB/s)(2431MiB/30001msec)
read8k
read: IOPS=11.5k, BW=89.9MiB/s (94.2MB/s)(2696MiB/30001msec)
randwrite8k
write: IOPS=9758, BW=76.2MiB/s (79.9MB/s)(2287MiB/30001msec)
randread8k
read: IOPS=5796, BW=45.3MiB/s (47.5MB/s)(1359MiB/30001msec)
write1k
write: IOPS=11.0k, BW=11.7MiB/s (12.2MB/s)(350MiB/30001msec)
read1k
read: IOPS=13.0k, BW=13.7MiB/s (14.3MB/s)(410MiB/30001msec)
randwrite1k
write: IOPS=11.7k, BW=11.4MiB/s (11.0MB/s)(343MiB/30001msec)
randread1k
read: IOPS=5317, BW=5318KiB/s (5446kB/s)(156MiB/30001msec)
From: https://www.cnblogs.com/jinanxiaolaohu/p/17489075.html