
Migrating a KVM image to an OpenStack cluster: the image file turned out to be corrupted

Posted: 2024-06-20 15:09:39

Because the image file was copied (or created) without shutting the guest down first, the system disk inside it was corrupted. After repairing the filesystem, a new virtual machine built from the same image boots and works normally.

Repair commands:

xfs_repair /dev/vda2     # if this fails, use the command below; if the device is busy, umount it first

xfs_repair -L  /dev/vda2
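Instead of booting the broken guest and repairing from the emergency shell (as done below), the partition inside the image file can also be repaired offline from the host via `qemu-nbd`. This is only a sketch: it assumes root, the `nbd` kernel module, a qcow2/raw image, and that the root filesystem is the second partition; the helper name and image path are ours, not from the original post.

```shell
# Hedged sketch: repair an XFS partition inside a disk image *offline*,
# without booting the guest. Requires root and qemu-utils; the partition
# number default and the example image name are assumptions.
repair_image() {
    img="$1"                               # e.g. mcw-image.qcow2 (hypothetical)
    part="${2:-2}"                         # partition number inside the image
    modprobe nbd max_part=8                # expose partitions as /dev/nbd0pN
    qemu-nbd --connect=/dev/nbd0 "$img"    # attach the image file to /dev/nbd0
    xfs_repair "/dev/nbd0p${part}" \
        || xfs_repair -L "/dev/nbd0p${part}"   # -L zeroes the metadata log
    qemu-nbd --disconnect /dev/nbd0        # detach when done
}
```

Only run this on an image no running VM is using, and note that `-L` discards any unreplayed log entries, so recent writes may be lost (the tool itself warns about this).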

【1】Define the virtual machine and discover that the disk file is corrupted

[[email protected] mcw]# virsh define mcw-vq21-cloudservice016.xml 
Domain mcw3 defined from mcw-vq21-cloudservice016.xml

[[email protected] mcw]# virsh start mcw3
Domain mcw3 started

[[email protected] mcw]# virsh console mcw3
Connected to domain mcw3
Escape character is ^]
[    1.684848] XFS (vda2): Metadata corruption detected at xfs_agi_write_verify+0xa8/0xb0 [xfs], xfs_agi block 0x4a3fa02
[    1.686701] XFS (vda2): Unmount and run xfs_repair
[    1.687674] XFS (vda2): First 64 bytes of corrupted metadata buffer:
[    1.688899] ffff88022a7f7c00: 58 41 47 49 00 00 00 01 00 00 00 03 00 31 7f c0  XAGI.........1..
[    1.690575] ffff88022a7f7c10: 00 01 68 80 00 00 f9 8e 00 00 00 02 00 00 00 26  ..h............&
[    1.692296] ffff88022a7f7c20: 00 00 43 c0 ff ff ff ff ff ff ff ff ff ff ff ff  ..C.............
[    1.694060] ffff88022a7f7c30: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
[    1.695760] XFS (vda2): Corruption of in-memory data detected.  Shutting down filesystem
[    1.696775] XFS (vda2): Please umount the filesystem and rectify the problem(s)
[    1.710076] systemd-fstab-generator[337]: Failed to open /sysroot/etc/fstab: Input/output error

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.


:/# cat /sysroot/etc/fstab
cat: /sysroot/etc/fstab: Input/output error
:/# xfs_repair -L /dev/vda2

【2】Repair the disk: the partition may still be mounted, so unmount it first, then repair

:/# xfs_repair -L /dev/vda2
xfs_repair: cannot open /dev/vda2: Device or resource busy
:/# ls /dev/vda
vda   vda1  vda2  
:/# ls /dev/vda2 
/dev/vda2
:/# xfs_repair  /dev/vda2
xfs_repair: cannot open /dev/vda2: Device or resource busy
:/# df -h
sh: df: command not found
:/# xfs_repair   /dev/vda1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
:/# xfs_repair   /dev/vda2
xfs_repair: cannot open /dev/vda2: Device or resource busy
:/# umount /dev/vda2
:/# xfs_repair   /dev/vda2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
:/# xfs_repair -L  /dev/vda2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
agi_freecount 38, counted 39 in ag 3
agi unlinked bucket 62 is 510462 in ag 3 (inode=101173758)
agi_freecount 12, counted 11 in ag 0
sb_icount 2700736, counted 2702336
sb_ifree 521, counted 484
sb_fdblocks 2615441, counted 2537173
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 34544933 claims free block 6779746
data fork in ino 34544933 claims free block 6779747
data fork in regular inode 41280440 claims used block 5980457
correcting nextents for inode 41280440
bad data fork in inode 41280440
cleared inode 41280440
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 2
entry "crash.log" in shortform directory 33572697 references free inode 41280440
junking entry "crash.log" in directory inode 33572697
        - agno = 3
data fork in ino 35349723 claims dup extent, off - 496, start - 5980457, cnt 512
correcting nextents for inode 35349723
bad data fork in inode 35349723
cleared inode 35349723
entry "4245F16B0F76" at block 14044 offset 2512 in directory inode 614085 references free inode 23793526
        clearing inode number in entry at offset 2512...
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
bad hash table for directory inode 614085 (no leaf entry): rebuilding
rebuilding directory inode 614085
entry "1946a8d2d31374a30aaaee7ae20de21adb498e4a18dd65112e517a1b20150816-json.log" in directory inode 35349704 points to free inode 35349723
bad hash table for directory inode 35349704 (no data entry): rebuilding
rebuilding directory inode 35349704
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Metadata corruption detected at xfs_dir3_block block 0x1aceee0/0x1000
libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x1aceee0/0x1000
Maximum metadata LSN (14355:22814) is ahead of log (1:2).
Format log to cycle 14358.
releasing dirty buffer (bulk) to free list!done
:/# 
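The "Device or resource busy" dance above can be avoided by checking `/proc/mounts` before calling `xfs_repair`. A minimal sketch (the helper names are ours, not from the original post):

```shell
# Minimal sketch: refuse to repair a device that still appears mounted,
# unmounting it first when possible, since xfs_repair rejects busy devices.
is_mounted() {
    # match either the device (field 1) or the mount point (field 2)
    grep -qE "^$1 | $1 " /proc/mounts
}

repair_if_unmounted() {
    dev="$1"
    if is_mounted "$dev"; then
        umount "$dev" || { echo "cannot umount $dev" >&2; return 1; }
    fi
    xfs_repair "$dev"
}
```

For example, `repair_if_unmounted /dev/vda2` would have handled the busy case above in one step.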

【3】Exit, create a new virtual machine from the repaired disk file, and check whether it works normally. It does.

[[email protected] mcw]# virsh destroy mcw3
Domain mcw3 destroyed

[[email protected] mcw]# virsh undefine mcw3
Domain mcw3 has been undefined

[[email protected] mcw]# virsh define mcw-vq21-cloudservice016.xml 
Domain mcw3 defined from mcw-vq21-cloudservice016.xml

[[email protected] mcw]# virsh start mcw3
Domain mcw3 started

[[email protected] mcw]# virsh console mcw3
Connected to domain mcw3
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 4.14.15-1.el7.elrepo.x86_64 on an x86_64

vm-qa-cloudservice016 login: root
Password: 
Last login: Tue Jun  4 15:03:42 on pts/0
[[email protected] ~]# hostname -I
172.17.0.1 172.18.0.1 
[[email protected] ~]# ls /home/apps/
packages  sserver  user_account
[[email protected] ~]# 
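The destroy/undefine/define/start cycle used in steps 1 and 3 can be wrapped in a small helper. The function name is ours; the `virsh` subcommands are standard. Note that `virsh destroy` is a hard power-off, which is acceptable here only because the guest was already stuck in the emergency shell:

```shell
# Sketch: rebuild a libvirt domain from its XML after the backing disk
# image has been repaired.
rebuild_domain() {
    dom="$1"; xml="$2"
    virsh destroy "$dom" 2>/dev/null || true    # ignore "domain not running"
    virsh undefine "$dom" 2>/dev/null || true   # ignore "domain not defined"
    virsh define "$xml"
    virsh start "$dom"
}
# Usage: rebuild_domain mcw3 mcw-vq21-cloudservice016.xml
```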

From: https://www.cnblogs.com/machangwei-8/p/18258710
