Re: Discussion: performance issue on event activation mode

On 18. 10. 2021 at 17:04, David Teigland wrote:
On Mon, Oct 18, 2021 at 06:24:49AM +0000, Martin Wilck wrote:
I'd like to second Peter here: "RUN" is in general less fragile than
IMPORT{PROGRAM}. You should use IMPORT{PROGRAM} if and only if

  - the invoked program can work with the incomplete udev state of the
    device (the program should not try to access the device via
    libudev; it should instead get properties either from sysfs or from
    the uevent's environment variables), and
  - you need the result or the output of the program in order to proceed
    with rules processing.
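
For illustration, a minimal sketch of the difference (the helper path and
the MY_RESULT property are made-up placeholders, not anything from the
lvm2 rules):

    # IMPORT{program}: the helper runs synchronously while this uevent's
    # rules are still being processed, and its KEY=VALUE stdout becomes
    # udev properties that later rules in the same run can test.
    IMPORT{program}="/usr/sbin/my_helper $env{DEVNAME}"
    ENV{MY_RESULT}=="ok", SYMLINK+="my_subsystem/%k"

    # RUN: the program is only queued and executed after all rules for
    # the uevent have been processed, so its output cannot feed back
    # into rule processing, but it also cannot stall it.
    RUN+="/usr/sbin/my_helper $env{DEVNAME}"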

Those are both true in this case.  I can't say I like it either, but udev
rules force hacky solutions on us.  I began trying to use RUN several
months ago and I think I gave up trying to find a way to pass values from
the RUN program back into the udev rule (possibly by writing values to a
temp file and then doing IMPORT{file}).  The udev rule needs the name of
the VG to activate, and that name comes from the pvscan.  For an even
uglier form of this, see the equivalent I wrote for dracut:
https://github.com/dracutdevs/dracut/pull/1567/files
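
In rule form, the pattern described above boils down to something like the
sketch below; the pvscan options beyond '--cache', the
LVM_VG_NAME_COMPLETE property name and the exact systemd-run invocation
are assumptions for illustration, not a quote of the branch's rules:

    # pvscan runs synchronously via IMPORT{program} and reports, as
    # KEY=VALUE output, whether this PV has just completed a VG and
    # which one (the property name here is an assumption).
    IMPORT{program}="/usr/sbin/lvm pvscan --cache --listvg --checkcomplete $env{DEVNAME}"

    # Only when a VG became complete, activate it; the activation is
    # detached via systemd-run so it does not block rule processing.
    ENV{LVM_VG_NAME_COMPLETE}=="?*", RUN+="/usr/bin/systemd-run --no-block /usr/sbin/lvm vgchange -aay $env{LVM_VG_NAME_COMPLETE}"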

The latest version of the hybrid service+event activation is here
https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/dev-dct-activation-switch-7

I've made it simple to edit lvm.conf to switch between:
- activation from fixed services only
- activation from events only
- activation from fixed services first, then from events
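
A hypothetical lvm.conf sketch of such a switch (the option name and
values below are invented for illustration; the branch defines its own
setting):

    global {
        # Invented name/values -- the real knob is whatever
        # dev-dct-activation-switch-7 adds to lvm.conf.
        #   "service"            - fixed services only
        #   "event"              - events only
        #   "service_then_event" - fixed services first, then events
        autoactivation_source = "service_then_event"
    }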

There are sure to be tradeoffs: we know that many concurrent activations
from events are slow, and fixed services, which are more serialized, could
be delayed by slow devices.  I'm still undecided on the best default
setting, i.e. which will work best for most people, and would welcome any
thoughts or relevant experience.


I've done some testing on these issues - we are trimming away some 'easy to fix' problems (so git HEAD should now already be somewhat faster).

The more generic solution for autoactivation should likely try to activate as many complete VGs as it has found at any given moment.

ATM lvm2 suffers when it is run massively in parallel - this has not yet been fully analyzed - but throughput is certainly much better when the number of lvm2 commands executed in parallel is limited.

Our goal ATM is to accelerate 'pvscan'.

We could think about whether there is some easy mechanism to 'accumulate' complete VGs and activate all of them with a single 'vgchange' command - and start the next such command only after the running one has finished. This currently gives reasonably good throughput and should work without the 'exceptional' case of being fast only once.
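
As a rough shell sketch of that 'accumulate and activate' idea, assuming
each event appends the name of a newly complete VG to a queue file and
then invokes this worker (the file paths and the queue mechanism are made
up for illustration):

    #!/bin/sh
    # Batched activation sketch: events do  echo "$VG" >> $QUEUE  and
    # then run this script; flock ensures only one activation pass runs
    # at a time, and each pass activates the whole accumulated batch
    # with a single vgchange call before the next batch is started.
    QUEUE=/run/lvm-activate.queue
    LOCK=/run/lvm-activate.lock

    (
        flock 9
        while [ -s "$QUEUE" ]; do
            mv "$QUEUE" "$QUEUE.batch"        # take the current batch
            vgs=$(sort -u "$QUEUE.batch")
            rm -f "$QUEUE.batch"
            # one command activates every VG collected so far; the next
            # batch is processed only after this command has finished
            # ($vgs is intentionally unquoted: one VG name per argument)
            vgchange -aay $vgs
        done
    ) 9>"$LOCK"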

Another point to think about is 'limiting' the set of PVs for this activation command - so we avoid repeated validation of the whole system. The --devices|--devicesfile options should be usable for this, but it needs some thought on how to use them in a smart way together with the 'collected' activation.
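
For example (device paths and file name are illustrative), the collected
activation could restrict each command to just the PVs of the VGs it is
about to activate:

    # validate/scan only the listed PVs instead of the whole system
    vgchange -aay --devices /dev/sdb1,/dev/sdc1 vg0

    # or point the command at a prepared devices file
    vgchange -aay --devicesfile my-batch.devices vg1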



Regards

Zdenek


_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



