Forgot to add the journal output, though I do not think it improves the odds:
kernel: device-mapper: table: 253:8: thin: Couldn't open thin internal device
kernel: device-mapper: ioctl: error adding target to table
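In case it helps with the analysis: as far as I understand, that error usually
means the thin device id lvm asks for is not present in the pool metadata. The
ids that actually are present could be listed with the thin-provisioning-tools,
using the metadata snapshot mechanism since the pool is active (only a sketch;
the _tmeta device name is guessed from my VG/pool names, please double check
before running anything):

[root]# dmsetup message VG_Raid6-ThinPoolRaid6-tpool 0 reserve_metadata_snap
[root]# thin_dump --metadata-snap /dev/mapper/VG_Raid6-ThinPoolRaid6_tmeta | grep dev_id
[root]# dmsetup message VG_Raid6-ThinPoolRaid6-tpool 0 release_metadata_snap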
On 11.01.20 at 23:00, Ede Wolf wrote:
So I reverted (swapped) back to the _meta0 backup that had been created by
--repair, which brought me back to the transaction id error. Then I did a
vgcfgbackup, changed the transaction id to what lvm was expecting, restored
it, and, wohoo, the thinpool can be activated again.
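For the record, the steps were roughly these (a sketch from memory; the file
name and the transaction_id value are only examples, not the real ones):

[root]# vgcfgbackup -f /tmp/VG_Raid6.vg VG_Raid6
(edit /tmp/VG_Raid6.vg and set transaction_id in the thin pool segment to the
value lvm reported as expected, e.g. transaction_id = 123)
[root]# vgcfgrestore --force -f /tmp/VG_Raid6.vg VG_Raid6

The --force is needed because vgcfgrestore otherwise refuses to restore a VG
that contains thin pools.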
However, when trying to activate an actual volume within that thinpool:
# lvchange -ay VG_Raid6/data
device-mapper: reload ioctl on (253:8) failed: No data available
And that message holds true for every LV of that thinpool.
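For comparison, the device id lvm expects for each thin LV, plus the pool's
transaction id, can be listed like this (field names taken from lvs -o help;
transaction_id is only filled in for the pool itself):

[root]# lvs -o lv_name,thin_id,transaction_id VG_Raid6

As far as I understand, "No data available" from the thin target means the
requested device id does not exist in the pool metadata, so comparing these
ids against what the metadata actually contains should show what got lost.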
On 11.01.20 at 18:57, Ede Wolf wrote:
After having swapped a 2.2T thinpool metadata device for a 16GB one,
I've run into a transaction id mismatch. So I ran lvconvert --repair on
the thin pool - in fact, I had to run the repair twice, as the
transaction id error persisted after the first run.
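In case the exact commands matter: the repair was plain lvconvert --repair,
and the metadata swap would have been along these lines (the name of the new
16GB metadata LV is just an example):

[root]# lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/new_meta16g
[root]# lvconvert --repair VG_Raid6/ThinPoolRaid6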
Ever since then, I have not been able to activate the thinpool:
[root]# lvchange -ay VG_Raid6/ThinPoolRaid6
WARNING: Not using lvmetad because a repair command was run.
Activation of logical volume VG_Raid6/ThinPoolRaid6 is prohibited
while logical volume VG_Raid6/ThinPoolRaid6_tmeta is active.
So I disabled them and tried again:
[root]# lvchange -an VG_Raid6/ThinPoolRaid6_tdata
WARNING: Not using lvmetad because a repair command was run.
[root]# lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
WARNING: Not using lvmetad because a repair command was run.
[root]# lvchange -ay VG_Raid6/ThinPoolRaid6
WARNING: Not using lvmetad because a repair command was run.
device-mapper: resume ioctl on (253:3) failed: Invalid argument
Unable to resume VG_Raid6-ThinPoolRaid6-tpool (253:3).
And from the journal:
kernel: device-mapper: thin: 253:3: metadata device (4145152 blocks) too small: expected 4161600
kernel: device-mapper: table: 253:3: thin-pool: preresume failed, error = -22
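For reference, the actual size of the (hidden) _tmeta LV can be listed like
this; -a is needed to show hidden sub-LVs:

[root]# lvs -a -o lv_name,lv_size VG_Raid6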
Despite not using Ubuntu, I may have been bitten by this bug(?), as my
new metadata partition happens to be 16GB - and, if I calculate correctly
with 4 KiB metadata blocks, the kernel message above amounts to roughly
15.81 GiB present vs. 15.88 GiB expected:
"If pool meta is 16GB , lvconvert --repair will destroy logical volumes."
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1625201
Is there any way to make the data accessible again?
lvm2 2.02.186
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/