LVM "does not come up" after upgrading the physical disks in the underlying mdadm RAID6 array

Before:

  • separate SSD boot disk
  • 4 x 2 TB RAID6 ->
  • LVM PV
  • LVM VG built on the PV above
  • 2 LVM data volumes created in the VG
    • mounted on /home and /mnt/data
  • I upgraded from Ubuntu 20.04 to 22.04 (everything worked after the reboot)

The above worked well, but then I upgraded the 2 TB disks to 3 TB ones and added a fifth disk, ending up with a 5 x 3 TB RAID6.

I did not partition the new disks; I simply ran mdadm --fail / mdadm --remove on each old 2 TB disk, then mdadm --add for the new 3 TB disk, letting each one resync before moving on to the next (watching the progress in cat /proc/mdstat).
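The per-disk replacement loop described above can be sketched roughly as follows. This is a dry-run sketch, not the exact commands I typed: /dev/sdX and /dev/sdY are placeholder device names, and nothing is executed unless RUN=1 is set in the environment.

```shell
#!/bin/sh
# Dry-run sketch of the one-disk-at-a-time replacement procedure.
# With RUN unset, the mdadm commands are only printed for review;
# set RUN=1 to actually execute them.
run() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

replace_disk() {
  old="$1"; new="$2"
  run mdadm /dev/md0 --fail "$old"
  run mdadm /dev/md0 --remove "$old"
  run mdadm /dev/md0 --add "$new"
  # Wait for the rebuild to finish before touching the next disk.
  if [ "${RUN:-0}" = "1" ]; then
    while grep -q recovery /proc/mdstat; do sleep 60; done
  fi
}

replace_disk /dev/sdX /dev/sdY   # repeat once per 2 TB -> 3 TB swap
```

With RUN unset, the sketch just prints the three mdadm invocations so the sequence can be sanity-checked before running it against real devices.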

mdadm --detail /dev/md0 shows everything active, with all 5 disks in good shape. In short, /dev/md0 looks fine, and it survives reboots without issue.

      root@home:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Sep  3 11:02:41 2016
        Raid Level : raid6
        Array Size : 8790409728 (8.19 TiB 9.00 TB)
     Used Dev Size : 2930136576 (2.73 TiB 3.00 TB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Sep 29 10:53:05 2022
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : home:0  (local to host home)
              UUID : 73d52bf2:9c18d305:a554e9ae:e67b7fbc
            Events : 210995

    Number   Major   Minor   RaidDevice State
       4       8       48        0      active sync   /dev/sdd
       5       8        0        1      active sync   /dev/sda
       7       8       32        2      active sync   /dev/sdc
       6       8       16        3      active sync   /dev/sdb
       8       8       64        4      active sync   /dev/sde



root@home:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sde[8] sdd[4] sdc[7] sdb[6] sda[5]
      8790409728 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/11 pages [0KB], 131072KB chunk

unused devices: <none>

However, at boot I run into problems getting LVM up. If I run the LVM commands with -vv verbosity, they show that the LVM volume group is "seen", but nothing happens because of WARNING: device /dev/md0 is an md component, not setting device for PV.

      root@home:~# pvscan
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  No matching physical volumes found

root@home:~# pvscan -vv
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  global/locking_type not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to auto
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Locking /run/lock/lvm/P_global RB
  Device /dev/md0 metadata_version is 1.2.
  /dev/loop0: size is 138880 sectors
  /dev/sda: size is 5860533168 sectors
  /dev/md0: size is 17580819456 sectors
  /dev/loop1: size is 98272 sectors
  /dev/loop2: size is 0 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 5860533168 sectors
  /dev/sdc: size is 5860533168 sectors
  /dev/sdd: size is 5860533168 sectors
  /dev/sde: size is 5860533168 sectors
  /dev/sdf: size is 117231408 sectors
  /dev/sdf1: size is 117227520 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: No lvm label detected
  /dev/sda: using cached size 5860533168 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: lvm2 label detected at sector 1
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: No lvm label detected
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdc: using cached size 5860533168 sectors
  /dev/sdd: using cached size 5860533168 sectors
  /dev/sde: using cached size 5860533168 sectors
  /dev/sdf: using cached size 117231408 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: No lvm label detected
  /dev/loop2: using cached size 0 sectors
  /dev/loop3: using cached size 0 sectors
  /dev/loop4: using cached size 0 sectors
  /dev/loop5: using cached size 0 sectors
  /dev/loop6: using cached size 0 sectors
  /dev/loop7: using cached size 0 sectors
  Locking /run/lock/lvm/V_VolGroupArray RB
  /dev/md0: using cached size 17580819456 sectors
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  Unlocking /run/lock/lvm/V_VolGroupArray
  Reading orphan VG #orphans_lvm2.
  No matching physical volumes found
  Unlocking /run/lock/lvm/P_global

root@home:~# vgscan -vv
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  global/locking_type not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to auto
  Locking /run/lock/lvm/P_global RB
  Device /dev/md0 metadata_version is 1.2.
  /dev/loop0: size is 138880 sectors
  /dev/sda: size is 5860533168 sectors
  /dev/md0: size is 17580819456 sectors
  /dev/loop1: size is 98272 sectors
  /dev/loop2: size is 0 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 5860533168 sectors
  /dev/sdc: size is 5860533168 sectors
  /dev/sdd: size is 5860533168 sectors
  /dev/sde: size is 5860533168 sectors
  /dev/sdf: size is 117231408 sectors
  /dev/sdf1: size is 117227520 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: No lvm label detected
  /dev/sda: using cached size 5860533168 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: lvm2 label detected at sector 1
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: No lvm label detected
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdc: using cached size 5860533168 sectors
  /dev/sdd: using cached size 5860533168 sectors
  /dev/sde: using cached size 5860533168 sectors
  /dev/sdf: using cached size 117231408 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: No lvm label detected
  Obtaining the complete list of VGs to process
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Processing VG VolGroupArray hLJW0y-qc4O-x5oN-sIg0-5Yxi-JR1R-3A0FEV
  Locking /run/lock/lvm/V_VolGroupArray RB
  /dev/md0: using cached size 17580819456 sectors
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  Unlocking /run/lock/lvm/V_VolGroupArray
  Unlocking /run/lock/lvm/P_global

root@home:~# lvscan -vv
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  global/locking_type not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to auto
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Locking /run/lock/lvm/P_global RB
  Device /dev/md0 metadata_version is 1.2.
  /dev/loop0: size is 138880 sectors
  /dev/sda: size is 5860533168 sectors
  /dev/md0: size is 17580819456 sectors
  /dev/loop1: size is 98272 sectors
  /dev/loop2: size is 0 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 5860533168 sectors
  /dev/sdc: size is 5860533168 sectors
  /dev/sdd: size is 5860533168 sectors
  /dev/sde: size is 5860533168 sectors
  /dev/sdf: size is 117231408 sectors
  /dev/sdf1: size is 117227520 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: No lvm label detected
  /dev/sda: using cached size 5860533168 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: lvm2 label detected at sector 1
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: No lvm label detected
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdc: using cached size 5860533168 sectors
  /dev/sdd: using cached size 5860533168 sectors
  /dev/sde: using cached size 5860533168 sectors
  /dev/sdf: using cached size 117231408 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: No lvm label detected
  Obtaining the complete list of VGs before processing their LVs
  Processing VG VolGroupArray hLJW0y-qc4O-x5oN-sIg0-5Yxi-JR1R-3A0FEV
  Locking /run/lock/lvm/V_VolGroupArray RB
  /dev/md0: using cached size 17580819456 sectors
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  Unlocking /run/lock/lvm/V_VolGroupArray
  Unlocking /run/lock/lvm/P_global
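For what it's worth, the warning appears to come from LVM's md-component detection: the -vv output above shows devices/md_component_checks defaulting to auto, and in that mode LVM seems to conclude that /dev/md0 is itself a component of some md array and refuses to set up the PV on it. One thing I am considering trying (an assumption on my part, not a confirmed fix) is restricting that check to the start of each device:

```
# /etc/lvm/lvm.conf -- possible workaround: only look for md
# superblocks at the start of devices instead of the default "auto"
devices {
    md_component_checks = "start"
}
```

If I understand the option correctly, it can also be tested non-persistently first, e.g. pvscan --config 'devices/md_component_checks="start"', to see whether the PV then shows up.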

Please let me know what relevant output I should provide to help diagnose this.

Edit 1:

The following is after I commented out the LVM partitions in /etc/fstab.

Output of lsblk:

      root@home:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
loop0    7:0    0 67.8M  1 loop  /snap/lxd/22753
loop1    7:1    0   48M  1 loop  /snap/snapd/17029
sda      8:0    0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdb      8:16   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdc      8:32   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdd      8:48   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sde      8:64   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdf      8:80   0 55.9G  0 disk
└─sdf1   8:81   0 55.9G  0 part  /

Output of blkid:

      root@home:~# blkid
/dev/sdf1: UUID="751e6495-f77b-49a0-b170-2be146064752" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="96cba4d1-01"
/dev/sdd: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="eaa0c0ab-08e9-bbd9-79bf-18386fcee952" LABEL="home:0" TYPE="linux_raid_member"
/dev/sdb: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="06927a3a-e179-95d0-ae6d-591faed89962" LABEL="home:0" TYPE="linux_raid_member"
/dev/md0: UUID="PjdxdD-aMVt-5qDb-ZtHJ-tzec-K0ho-l92bbW" TYPE="LVM2_member"
/dev/sde: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="34a14cf3-3ebe-c11b-c2d1-289c1b6a3527" LABEL="home:0" TYPE="linux_raid_member"
/dev/sdc: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="a24fee08-76a9-09cf-2650-963f1c2431b0" LABEL="home:0" TYPE="linux_raid_member"
/dev/sda: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="4dbbe2cb-5cd4-22f1-4dee-26d0079c6d1a" LABEL="home:0" TYPE="linux_raid_member"
/dev/loop1: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"

Output of mdadm --examine /dev/md0:

      root@home:~# mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.

Output of lvmdiskscan:

      root@home:~# lvmdiskscan
  /dev/loop0 [      67.81 MiB]
  /dev/md0   [      <8.19 TiB] LVM physical volume
  /dev/loop1 [      47.98 MiB]
  /dev/sdf1  [     <55.90 GiB]
  0 disks
  3 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
