Hello.
I am looking for advice on how to handle this situation. Has anyone been through something similar?
We use LVM thin provisioning on one of our arrays, but after a RAID controller failure we can no longer activate any of the thin volumes:
# lvchange -a y data/thin
Check of pool data/thin failed (status:1). Manual repair required!
I tried lvconvert --repair data/thin, but got this:
# lvconvert --repair data/thin
truncating metadata device to 4161600 4k blocks
bad checksum in superblock, wanted 1494954599
Repair of thin metadata volume of thin pool data/thin failed (status:1). Manual repair required!
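As far as I understand, lvconvert --repair just runs the external thin_repair binary (whatever lvm.conf points to as global/thin_repair_executable) from the hidden thin_tmeta LV into the spare lvol0_pmspare LV, so the "bad checksum in superblock" line above looks like it comes from thin_repair itself rather than from LVM. To check which binary and version are being called (I am assuming -V prints the version, as it does for thin_check):
# lvmconfig global/thin_repair_executable
# thin_repair -V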
I have seen recipes based on thin_dump, but I cannot use them: thin_dump needs to read from an activated metadata device, and because of the broken superblock I cannot get the pool's LVs to appear under /dev/mapper/ at all.
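For completeness, the kind of recipe I mean looks roughly like this. It assumes an LVM version that allows read-only component activation of the hidden thin_tmeta LV while the pool is inactive (I believe this appeared around 2.02.178), and "repairmeta" is just a placeholder name for a scratch LV to hold repaired metadata:
# lvchange -ay data/thin_tmeta
# thin_check /dev/mapper/data-thin_tmeta
# thin_dump --repair /dev/mapper/data-thin_tmeta > /tmp/thin_meta.xml
# lvcreate -L 16G -n repairmeta data
# thin_repair -i /dev/mapper/data-thin_tmeta -o /dev/mapper/data-repairmeta
(and then, as I understand it, the repaired LV would be swapped back in as the pool metadata with lvconvert --thinpool data/thin --poolmetadata data/repairmeta). But this is exactly where I am stuck: I cannot get the metadata LV to show up under /dev/mapper/ in the first place, so the thin_* tools have nothing to read from.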
Is there any chance of a successful recovery, and what can be done in a situation like this?
Best regards,
Pavel
P.S. Some output for reference:
# pvscan
PV /dev/sdc1 VG data lvm2 [<10.90 TiB / <4.87 TiB free]
# vgs
VG #PV #LV #SN Attr VSize VFree
data 1 11 0 wz--n- <10.90t <4.87t
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
[lvol0_pmspare] data ewi------- 16.00g
thin data twi---tz-- 6.00t
[thin_tdata] data Twi------- 6.00t
[thin_tmeta] data ewi------- 16.00g
vm-105-disk-0 data Vwi---tz-- 40.00g thin
vm-105-disk-1 data Vwi---tz-- 500.00g thin
vm-107-disk-1 data Vwi---tz-- 40.00g thin
vm-107-disk-2 data Vwi---tz-- 40.00g thin
vm-107-disk-3 data Vwi---tz-- 500.00g thin
vm-123-disk-0 data Vwi---tz-- 32.00g thin
vm-148-disk-0 data Vwi---tz-- 1.17t thin
vm-161-disk-0 data Vwi---tz-- 32.00g thin
vm-166-disk-0 data Vwi---tz-- 32.00g thin
vm-168-disk-0 data Vwi---tz-- 50.00g thin
# lvscan -a
File descriptor 7 (pipe:[224496]) leaked on lvscan invocation. Parent PID 24064: bash
inactive '/dev/data/thin' [6.00 TiB] inherit
inactive '/dev/data/vm-107-disk-1' [40.00 GiB] inherit
inactive '/dev/data/vm-107-disk-2' [40.00 GiB] inherit
inactive '/dev/data/vm-107-disk-3' [500.00 GiB] inherit
inactive '/dev/data/vm-105-disk-0' [40.00 GiB] inherit
inactive '/dev/data/vm-105-disk-1' [500.00 GiB] inherit
inactive '/dev/data/vm-148-disk-0' [1.17 TiB] inherit
inactive '/dev/data/vm-161-disk-0' [32.00 GiB] inherit
inactive '/dev/data/vm-166-disk-0' [32.00 GiB] inherit
inactive '/dev/data/vm-123-disk-0' [32.00 GiB] inherit
inactive '/dev/data/vm-168-disk-0' [50.00 GiB] inherit
inactive '/dev/data/lvol0_pmspare' [16.00 GiB] inherit
inactive '/dev/data/thin_tmeta' [16.00 GiB] inherit
inactive '/dev/data/thin_tdata' [6.00 TiB] inherit