Re: lvm2 weirdness in Fedora 40

On Sat, 2025-02-22 at 10:55 -0600, Roger Heflin wrote:
> On Sat, Feb 22, 2025 at 10:05 AM <christophe.ochal@xxxxxxxxx> wrote:
> > 
> > On Sun, 2025-02-16 at 15:45 -0600, Roger Heflin wrote:
> > > The first thing I would do is change the mount line to add
> > > ",nofail"
> > > on the options so that a failure does not drop you to single user
> > > mode.
> > > 
> > > Then you can boot the system up and with the system on the
> > > network
> > > figure out the state of things.
> > 
> > I wasn't aware of this option, but I'm not sure it's any better.
> > Right now I can just run vgchange -ay and exit to resume the boot
> > process and end up in the Gnome environment. If I add nofail to
> > /home I still end up in an unusable state, because Gnome can't load
> > my user's files.
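
If I understand the suggestion correctly, the entry in /etc/fstab would
end up looking something like the line below (the device path and
filesystem type are just placeholders for whatever /home actually uses;
x-systemd.device-timeout is only there to shorten the wait):

  /dev/mapper/vg_data-home  /home  ext4  defaults,nofail,x-systemd.device-timeout=30s  0  2

With nofail the boot continues without /home instead of dropping to
emergency mode, which is exactly why Gnome then falls over on the
missing user files.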
> > 
> > > In the old days I have seen lvm2-lvmetad break systems on boot up
> > > in
> > > bizarre ways.
> > 
> > I'm not sure that Fedora uses lvm2-lvmetad, and Google isn't helping
> > me; any hits I find are for Red Hat 9.
> > 
> > I wonder if this is relevant:
> > 
> > From lvm.conf:
> > 
> > # Configuration option devices/scan_lvs.
> > # Allow LVM LVs to be used as PVs. When enabled, LVM commands will
> > # scan active LVs to look for other PVs. Caution is required to
> > # avoid using PVs that belong to guest images stored on LVs.
> > # When enabled, the LVs scanned should be restricted using the
> > # devices file or the filter. This option does not enable
> > # autoactivation of layered VGs, which requires editing LVM udev
> > # rules (see LVM_PVSCAN_ON_LVS.)
> > # This configuration option has an automatic default value.
> > # scan_lvs = 1
> > 
> > I had no luck on googling LVM_PVSCAN_ON_LVS
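
If I read that comment right, turning the option on would just mean
setting scan_lvs in the devices section of /etc/lvm/lvm.conf (untested
on my side; the effective value can be checked with lvmconfig):

  lvmconfig devices/scan_lvs      # show the current setting

  # /etc/lvm/lvm.conf, inside the devices { } section
  scan_lvs = 1

Per the comment above, that alone still would not autoactivate the
layered VG at boot; that seems to be what the LVM_PVSCAN_ON_LVS udev
hint is about.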
> > 
> > 
> 
> That is only if you put a pv on top of a vg.

I do recall that I have 1 PV on top of one Volume Group (on top of 5
spinning-rust drives), and one PV on top of 2 SSDs that used to function
as a cache, set up as a mirror.

The cache has since been retired, but I have no idea how to move the
data that is on the PV on that VG so I can get rid of that one last
layer.

I hope this is clear; I'm not at the workstation in question at the
moment.
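
From the man pages, the general recipe for emptying a PV and dropping
it from a VG seems to be roughly the following. The names (outer_vg,
/dev/inner_vg/pv_lv) are placeholders for my actual layout, it assumes
the remaining PVs in the outer VG have enough free space, and I would
obviously take backups first:

  pvmove /dev/inner_vg/pv_lv               # migrate extents off the layered PV
  vgreduce outer_vg /dev/inner_vg/pv_lv    # remove it from the outer VG
  pvremove /dev/inner_vg/pv_lv             # wipe the PV label
  lvremove inner_vg/pv_lv                  # then drop the LV underneath

Corrections welcome if that is the wrong order or the wrong tools.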

> Fedora may have finally got rid of lvmetad so it may not be the
> issue.
> 
> What does cat /proc/cmdline look like?
> 
> If /home is listed as mounting early but is not explicitly in cmdline
> (either as a list of LVs (rd.lvm.lv=) or VGs (rd.lvm.vg=) to turn on at
> boot) it will be missing.  And if you configured it after the initial
> install and/or changed the name of the LV, then it won't get activated
> early and will fail early.  I always use rd.lvm.vg to activate
> everything in the boot VG at startup.
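
I'll check /proc/cmdline when I'm back at the machine. For reference
(the VG name below is just a placeholder), adding a whole VG to the
early-boot activation list on Fedora would be something like:

  cat /proc/cmdline
  # add rd.lvm.vg=<vgname> for each VG that must be active early, e.g.:
  grubby --update-kernel=ALL --args="rd.lvm.vg=vg_home"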
> 
> you might try a "systemd-analyze blame" and see what it dumps for
> timers.
> 
> The typical missing-disk timeout is something like 60-90 seconds,
> where the boot should pause (and fail) before it gives you an
> emergency-mode prompt.
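
When I'm back at the workstation I'll collect something along these
lines (home.mount is my guess at the unit name systemd generates for
/home from fstab):

  systemd-analyze blame | head -20
  systemctl status home.mount
  journalctl -b -u home.mount
  lvs -a -o lv_name,vg_name,lv_active

and post the results here.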





