The first thing I would do is add ",nofail" to the options on the fstab
mount line so that a mount failure does not drop you into single-user
mode. Then you can boot the system, get it on the network, and figure
out the state of things from there (a sketch of such an entry is below
the quoted message).

In the old days I saw lvm2-lvmetad break systems on boot in bizarre
ways. I disabled it on my systems (and on the thousands of enterprise
machines I used to support) because it sometimes caused random PVs not
to be found. Typically, if something causes a PV not to be found, it
will be repeatable on that given system (likely some timing problem).
The only useful thing lvmetad does is speed up scans when disks are
spun down and/or when you have thousands of disks, and even then it
does not speed things up much unless a huge number of those disks are
spun down. On large SAN systems, my testing showed that without it,
scanning thousands of disks took 2-3 seconds versus being immediate
with it; that wait was well worth it given the random failures that
caused havoc. Tiny changes in the lvm/udev rules have also flipped it
from working to broken. (How I disabled it is sketched below the quote
as well.)

On Sun, Feb 16, 2025 at 9:11 AM <christophe.ochal@xxxxxxxxx> wrote:
>
> I'm at present fighting with LVM2. For some weird reason I can't get
> my LVM volume to be activated and mounted at boot, which leaves me
> having to run "vgchange -ay" by hand (after I'm dropped into a shell
> and prompted for my root password). To make debugging even more
> troublesome, I can only mess with this over the weekends. I've
> included the lvmdump with this mail. Any more help would be very
> welcome, as I'm at a loss as to how to proceed. This might also have
> relevant information:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=2338735
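
For illustration, here is roughly what such an fstab entry could look
like; the device path and mount point are made up, only the ",nofail"
option is the point:

    # /etc/fstab -- device and mount point below are hypothetical
    # "nofail" keeps a failed mount from blocking boot or dropping
    # you into the emergency shell; x-systemd.device-timeout shortens
    # how long systemd waits for the device to appear (default 90s).
    /dev/mapper/datavg-datalv  /data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2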
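
And for reference, a rough sketch of how I disabled lvmetad on those
machines (from memory, assuming a systemd distro using dracut; note
that current LVM releases have removed lvmetad entirely, so this only
applies to older versions):

    # In /etc/lvm/lvm.conf, in the global section, set:
    #     use_lvmetad = 0
    # Then stop and mask the units so nothing restarts the daemon:
    systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
    systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
    # Rebuild the initramfs so early boot sees the change (command
    # varies by distro; dracut shown here):
    dracut -f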