Re: system boot time regression when using lvm2-2.03.05

> > > ```
> > > _pvscan_aa
> > >   vgchange_activate
> > >    _activate_lvs_in_vg
> > >     sync_local_dev_names
> > >      fs_unlock
> > >       dm_udev_wait <=== this point!
> > > ```

> Could you explain to us what's happening in this code? IIUC, an
> incoming uevent triggers pvscan, which then possibly triggers VG
> activation. That in turn would create more uevents. The pvscan process
> then waits for uevents for the tree "root" of the activated LVs to be
> processed.
> 
> Can't we move this waiting logic out of the uevent handling? It seems
> weird to me that a process that acts on a uevent waits for the
> completion of another, later uevent. This is almost guaranteed to cause
> delays during "uevent storms". Is it really necessary?
> 
> Maybe we could create a separate service that would be responsible for
> waiting for all these outstanding udev cookies?

Peter Rajnoha walked me through the details of this, and explained that a
delay like the one you describe is quite possible given the default
timeouts, and that lvm doesn't strictly require that udev wait.

So, I pushed out this patch to allow pvscan with --noudevsync:
https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=3e5e7fd6c93517278b2451a08f47e16d052babbb

You'll want to add that option to lvm2-pvscan.service; if things look
good from testing, we can hopefully update the service to use it by
default.
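For anyone wanting to try this locally, one way is a systemd drop-in
rather than editing the packaged unit. This is only a sketch: it assumes
the templated unit name lvm2-pvscan@.service and the stock ExecStart line
shipped by lvm2; check your distro's actual unit with `systemctl cat`
and adjust accordingly.

```
# /etc/systemd/system/lvm2-pvscan@.service.d/noudevsync.conf
# Hypothetical drop-in; verify the ExecStart line against your
# installed unit before applying, then run `systemctl daemon-reload`.
[Service]
ExecStart=
ExecStart=/usr/sbin/lvm pvscan --cache --activate ay --noudevsync %i
```

The empty ExecStart= line is needed to clear the original command before
replacing it, since ExecStart in a non-oneshot service must be set
exactly once.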

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


