Unable to activate lvm raid6 with one missing device
I am testing lvm raid6 (stock versions in Ubuntu 18.04). For that I created five fake PVs (files attached via losetup), then created a raid6 VG and an ext4 filesystem on a single LV using all the space.
After that I created a file on the filesystem, unmounted it, deactivated the LV, losetup -d'd the last device (loop11), deleted its backing file and rescanned the PVs¹. Now I cannot activate the LV (which should run in degraded mode with 4 of the 5 devices).
[¹] Initially lvm kept complaining about the missing PV, which instead showed up with size 0, so at that point I restarted its daemon. It forgot all the devices, and after a pvscan I was able to remove the missing device from the VG with vgreduce.
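For reference, the whole procedure was roughly the following (reconstructed from memory, so the file names and the exact lvcreate arguments are approximate):
# for i in 7 8 9 10 11; do truncate -s 1G /tmp/pv$i.img; losetup /dev/loop$i /tmp/pv$i.img; done
# pvcreate /dev/loop{7..11}
# vgcreate r6 /dev/loop{7..11}
# lvcreate --type raid6 -i 3 -L 2G -n r6 r6
# mkfs.ext4 /dev/r6/r6
# mount /dev/r6/r6 /mnt && touch /mnt/testfile && umount /mnt
# lvchange -a n r6/r6
# losetup -d /dev/loop11
# rm /tmp/pv11.img
# pvscan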
The first sign of the problem seems to come from vgs: when asked to show devices, it lists only part of the raid components (as far as my shallow knowledge of the lvm raid implementation goes):
# vgs
VG #PV #LV #SN Attr VSize VFree
r6 4 1 0 wz--n- 3,98g 1,28g
# vgs -o +devices
VG #PV #LV #SN Attr VSize VFree Devices
r6 4 1 0 wz--n- 3,98g 1,28g r6_rimage_0(0),r6_rimage_1(0),r6_rimage_2(0),r6_rimage_3(0),r6_rimage_4(0)
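(In case it helps with reproducing this: the per-sub-LV layout, including the rmeta volumes, can also be inspected with a command like the one below.)
# lvs -a -o name,attr,devices r6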
Any combination of options to lvchange fails with an error:
# lvchange -a y /dev/r6/r6 --activationmode degraded
device-mapper: reload ioctl on (253:10) failed: Input/output error
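The other variants I tried were along these lines (exact invocations from memory); they all failed the same way:
# lvchange -a y r6/r6
# lvchange -a y r6/r6 --activationmode partial
# vgchange -a y r6 --partial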
vgreduce seems happy (after the initial reduction):
# vgreduce --removemissing r6
Volume group "r6" is already consistent.
Here is the verbose vgdisplay / lvdisplay output:
# vgdisplay -v
--- Volume group ---
VG Name r6
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 14
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 3,98 GiB
PE Size 4,00 MiB
Total PE 1020
Alloc PE / Size 692 / 2,70 GiB
Free PE / Size 328 / 1,28 GiB
VG UUID EdFCub-qYdm-04UG-zH17-itL1-QpJu-zw8zg0
--- Logical volume ---
LV Path /dev/r6/r6
LV Name r6
VG Name r6
LV UUID UZ53QI-JvDa-Z10v-ezgZ-hAeD-Sufm-p5zbai
LV Write Access read/write
LV Creation host, time zelda, 2018-09-10 16:02:25 +0200
LV Status NOT available
LV Size <2,02 GiB
Current LE 516
Segments 1
Allocation inherit
Read ahead sectors auto
--- Physical volumes ---
PV Name /dev/loop7
PV UUID FgkYdz-bSbf-1jk9-3GDD-BBnO-ruvH-91JSEc
PV Status allocatable
Total PE / Free PE 255 / 82
PV Name /dev/loop8
PV UUID J0qXyZ-qcMt-gWu7-M4b8-bsdd-n9YF-8gfsYA
PV Status allocatable
Total PE / Free PE 255 / 82
PV Name /dev/loop9
PV UUID AK8MES-pyUq-jc6W-0ViU-Ee7N-vPc3-k2bEAY
PV Status allocatable
Total PE / Free PE 255 / 82
PV Name /dev/loop10
PV UUID v8hcOy-Y0AR-Ytmc-jpFk-ZfGm-lkCR-QHW5Ko
PV Status allocatable
Total PE / Free PE 255 / 82
lvdisplay:
# lvdisplay -vv
devices/global_filter not found in config: defaulting to global_filter = [ "a|.*/|" ]
Setting global/locking_type to 1
Setting global/use_lvmetad to 1
global/lvmetad_update_wait_time not found in config: defaulting to 10
Setting response to OK
Setting protocol to lvmetad
Setting version to 1
Setting global/use_lvmpolld to 1
Setting devices/sysfs_scan to 1
Setting devices/multipath_component_detection to 1
Setting devices/md_component_detection to 1
Setting devices/fw_raid_component_detection to 0
Setting devices/ignore_suspended_devices to 0
Setting devices/ignore_lvm_mirrors to 1
devices/filter not found in config: defaulting to filter = [ "a|.*/|" ]
Setting devices/cache_dir to /run/lvm
Setting devices/cache_file_prefix to
devices/cache not found in config: defaulting to /run/lvm/.cache
Setting devices/write_cache_state to 1
Setting global/use_lvmetad to 1
Setting activation/activation_mode to degraded
metadata/record_lvs_history not found in config: defaulting to 0
Setting activation/monitoring to 1
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/prioritise_write_locks to 1
Setting global/locking_dir to /run/lock/lvm
Setting global/use_lvmlockd to 0
Setting response to OK
Setting token to filter:3239235440
Setting daemon_pid to 8879
Setting response to OK
Setting global_disable to 0
report/output_format not found in config: defaulting to basic
log/report_command_log not found in config: defaulting to 0
Setting response to OK
Setting response to OK
Setting response to OK
Setting name to r6
Processing VG r6 EdFCub-qYdm-04UG-zH17-itL1-QpJu-zw8zg0
Locking /run/lock/lvm/V_r6 RB
Reading VG r6 EdFCubqYdm04UGzH17itL1QpJuzw8zg0
Setting response to OK
Setting response to OK
Setting response to OK
Setting name to r6
Setting metadata/format to lvm2
Setting id to FgkYdz-bSbf-1jk9-3GDD-BBnO-ruvH-91JSEc
Setting format to lvm2
Setting device to 1799
Setting dev_size to 2097152
Setting label_sector to 1
Setting ext_flags to 1
Setting ext_version to 2
/dev/loop11: stat failed: No such file or directory
Setting size to 1044480
Setting start to 4096
Setting ignore to 0
Setting id to J0qXyZ-qcMt-gWu7-M4b8-bsdd-n9YF-8gfsYA
Setting format to lvm2
Setting device to 1800
Setting dev_size to 2097152
Setting label_sector to 1
Setting ext_flags to 1
Setting ext_version to 2
Setting size to 1044480
Setting start to 4096
Setting ignore to 0
Setting id to AK8MES-pyUq-jc6W-0ViU-Ee7N-vPc3-k2bEAY
Setting format to lvm2
Setting device to 1801
Setting dev_size to 2097152
Setting label_sector to 1
Setting ext_flags to 1
Setting ext_version to 2
Setting size to 1044480
Setting start to 4096
Setting ignore to 0
Setting id to v8hcOy-Y0AR-Ytmc-jpFk-ZfGm-lkCR-QHW5Ko
Setting format to lvm2
Setting device to 1802
Setting dev_size to 2097152
Setting label_sector to 1
Setting ext_flags to 1
Setting ext_version to 2
Setting size to 1044480
Setting start to 4096
Setting ignore to 0
Stack r6/r6:0[0] on LV r6/r6_rmeta_0:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_0.
Stack r6/r6:0[0] on LV r6/r6_rimage_0:0.
Adding r6/r6:0 as an user of r6/r6_rimage_0.
Stack r6/r6:0[1] on LV r6/r6_rmeta_1:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_1.
Stack r6/r6:0[1] on LV r6/r6_rimage_1:0.
Adding r6/r6:0 as an user of r6/r6_rimage_1.
Stack r6/r6:0[2] on LV r6/r6_rmeta_2:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_2.
Stack r6/r6:0[2] on LV r6/r6_rimage_2:0.
Adding r6/r6:0 as an user of r6/r6_rimage_2.
Stack r6/r6:0[3] on LV r6/r6_rmeta_3:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_3.
Stack r6/r6:0[3] on LV r6/r6_rimage_3:0.
Adding r6/r6:0 as an user of r6/r6_rimage_3.
Stack r6/r6:0[4] on LV r6/r6_rmeta_4:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_4.
Stack r6/r6:0[4] on LV r6/r6_rimage_4:0.
Adding r6/r6:0 as an user of r6/r6_rimage_4.
Setting response to OK
Setting response to OK
Setting response to OK
/dev/loop7: size is 2097152 sectors
/dev/loop8: size is 2097152 sectors
/dev/loop9: size is 2097152 sectors
/dev/loop10: size is 2097152 sectors
Stack r6/r6:0[0] on LV r6/r6_rmeta_0:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_0.
Stack r6/r6:0[0] on LV r6/r6_rimage_0:0.
Adding r6/r6:0 as an user of r6/r6_rimage_0.
Stack r6/r6:0[1] on LV r6/r6_rmeta_1:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_1.
Stack r6/r6:0[1] on LV r6/r6_rimage_1:0.
Adding r6/r6:0 as an user of r6/r6_rimage_1.
Stack r6/r6:0[2] on LV r6/r6_rmeta_2:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_2.
Stack r6/r6:0[2] on LV r6/r6_rimage_2:0.
Adding r6/r6:0 as an user of r6/r6_rimage_2.
Stack r6/r6:0[3] on LV r6/r6_rmeta_3:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_3.
Stack r6/r6:0[3] on LV r6/r6_rimage_3:0.
Adding r6/r6:0 as an user of r6/r6_rimage_3.
Stack r6/r6:0[4] on LV r6/r6_rmeta_4:0.
Adding r6/r6:0 as an user of r6/r6_rmeta_4.
Stack r6/r6:0[4] on LV r6/r6_rimage_4:0.
Adding r6/r6:0 as an user of r6/r6_rimage_4.
Adding r6/r6 to the list of LVs to be processed.
Processing LV r6 in VG r6.
--- Logical volume ---
global/lvdisplay_shows_full_device_path not found in config: defaulting to 0
LV Path /dev/r6/r6
LV Name r6
VG Name r6
LV UUID UZ53QI-JvDa-Z10v-ezgZ-hAeD-Sufm-p5zbai
LV Write Access read/write
LV Creation host, time zelda, 2018-09-10 16:02:25 +0200
LV Status NOT available
LV Size <2,02 GiB
Current LE 516
Segments 1
Allocation inherit
Read ahead sectors auto
Unlocking /run/lock/lvm/V_r6
Setting global/notify_dbus to 1
Any ideas how to work around this error?