The disks are detected and the volume groups are visible. When I try to activate the VG, I get this:
vgchange -ay vg1
device-mapper: reload ioctl on (253:19) failed: Input/output error
0 logical volume(s) in volume group "vg1" now active
I executed 'vgchange -ay vg1 -vvvv -dddd', and this is the only place in the output where an error appears:
20:53:16.552602 vgchange[10795] device_mapper/libdm-deptree.c:2921 Adding target to (253:19): 0 31256068096 raid raid5_ls 3 128 region_size 32768 3 253:13 253:14 253:15 253:16 253:17 253:18
20:53:16.552609 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853 dm table (253:19) [ opencount flush ] [16384] (*1)
20:53:16.552619 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853 dm reload (253:19) [ noopencount flush ] [16384] (*1)
20:53:16.572481 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1903 device-mapper: reload ioctl on (253:19) failed: Input/output error
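For reference, the failing reload is on the top-level raid device (253:19), whose table references minors 253:13 through 253:18. A minimal way to map those minors back to LV names, assuming lvm2's standard vgname-lvname device-mapper naming, would be something like:

dmsetup info -c -o name,major,minor   # map dm minor numbers back to device names
dmsetup table | grep '^vg1'           # show whatever vg1 tables are currently loaded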
I've uploaded the two very verbose, debug-ridden logs (from the vgchange and lvchange runs mentioned below).
Ignore the naming; it's not a Gluster volume. I was planning to make two of these and mirror them with Gluster.
./drae
On Mon, Mar 23, 2020 at 5:14 AM Bernd Eckenfels <ecki@xxxxxxxxxxxxxxxxx> wrote:
Do you see any dmesg kernel errors when you try to activate the LVs?
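For example, a minimal way to capture just the kernel messages from the failing activation attempt (assuming the util-linux dmesg; nothing here is specific to this box):

dmesg -C            # clear the kernel ring buffer (as root)
vgchange -ay vg1    # reproduce the failing activation
dmesg -T            # anything printed now came from this attempt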
Regards,
Bernd
From: linux-lvm-bounces@xxxxxxxxxx <linux-lvm-bounces@xxxxxxxxxx> on behalf of Andrew Falgout <digitalw00t@xxxxxxxxx>
Sent: Saturday, March 21, 2020 4:22:04 AM
To: linux-lvm@xxxxxxxxxx <linux-lvm@xxxxxxxxxx>
Subject: lvm raid5 : drives all present but vg/lvm will not assemble
This started on a Raspberry Pi 4 running Raspbian. I moved the disks to my Fedora 31 system, which is running the latest updates and kernel. When I hit the same issue there, I knew it wasn't Raspbian.
I've reached the end of my rope on this. The disks are there, all three are accounted for, and the LVM data on them can be seen. But the VG refuses to activate, reporting I/O errors.
[root@hypervisor01 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 local_storage01 lvm2 a-- <931.51g 0
/dev/sdb1 local_storage01 lvm2 a-- <931.51g 0
/dev/sdc1 local_storage01 lvm2 a-- <931.51g 0
/dev/sdd1 local_storage01 lvm2 a-- <931.51g 0
/dev/sde1 local_storage01 lvm2 a-- <931.51g 0
/dev/sdf1 local_storage01 lvm2 a-- <931.51g <931.51g
/dev/sdg1 local_storage01 lvm2 a-- <931.51g <931.51g
/dev/sdh1 local_storage01 lvm2 a-- <931.51g <931.51g
/dev/sdi3 fedora_hypervisor lvm2 a-- 27.33g <9.44g
/dev/sdk1 vg1 lvm2 a-- <7.28t 0
/dev/sdl1 vg1 lvm2 a-- <7.28t 0
/dev/sdm1 vg1 lvm2 a-- <7.28t 0
[root@hypervisor01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
fedora_hypervisor 1 2 0 wz--n- 27.33g <9.44g
local_storage01 8 1 0 wz--n- <7.28t <2.73t
vg1 3 1 0 wz--n- 21.83t 0
[root@hypervisor01 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root fedora_hypervisor -wi-ao---- 15.00g
swap fedora_hypervisor -wi-ao---- 2.89g
libvirt local_storage01 rwi-aor--- <2.73t 100.00
gluster02 vg1 Rwi---r--- 14.55t
The one in question is the vg1/gluster02 LV.
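As a sketch, the hidden raid sub-LVs behind it, and the PVs each image sits on, can be listed with something like the following; the gluster02_rimage_N / gluster02_rmeta_N names it will show are an assumption based on lvm2's standard raid naming:

lvs -a -o name,attr,size,sync_percent,devices vg1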
I try to activate the VG:
[root@hypervisor01 ~]# vgchange -ay vg1
device-mapper: reload ioctl on (253:19) failed: Input/output error
0 logical volume(s) in volume group "vg1" now active
I've got the debugging output from:
vgchange -ay vg1 -vvvv -dddd
lvchange -ay --partial vg1/gluster02 -vvvv -dddd
I'm just not sure where I should dump the data for people to look at. Is there a way to tell the md/raid layer to ignore its metadata, since there wasn't an actual disk failure, and rebuild it from what is in the LVM metadata? Or can I at least get the LV to mount, so I can pull the data off? Any help is appreciated. If I can save the data, great. I'm tossing this to the community to see if anyone else has an idea of what I can do.
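A rough sketch of the checks and the degraded/partial activation attempts that seem to apply here, assuming a reasonably current lvm2 (flag and field names are taken from the lvm2 man pages, not verified against this exact version):

# What lvm2 itself reports about the raid LV before forcing anything
lvs -a -o name,attr,lv_health_status,sync_percent vg1

# Explicitly allow degraded activation (tolerates a missing or failed image)
lvchange -ay --activationmode degraded vg1/gluster02

# Last resort (same idea as the --partial run above): missing pieces become error targets, read-only copy-off only
lvchange -ay --activationmode partial vg1/gluster02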
./digitalw00t
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/