Re: Discussion: performance issue on event activation mode

On Mon, Oct 18, 2021 at 11:51:27PM +0200, Zdenek Kabelac wrote:
> The more generic solution with auto activation should likely try to 'activate'
> as many found complete VGs as it can at any given moment in time.

> ATM lvm2 suffers when it's being run massively parallel - this has not
> yet been fully analyzed - but there is certainly much better throughput if
> there is a limited number of 'parallel' executed lvm2 commands.

There are a couple of possible bottlenecks that we can analyze separately
(a rough way to reproduce each is sketched after the list):

1. the bottleneck of processing 100's or 1000's of uevents+pvscans.
2. the bottleneck of large number of concurrent vgchange -aay vgname.
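
To make the two cases concrete, the kind of load involved in each could be
reproduced separately with something like the sketch below.  This is only
illustrative: the device and VG names are made up, and in practice the
pvscans are fired by udev rather than by a shell loop.

# case 1: a burst of pvscans, one per device, as uevents would generate
for dev in /dev/sd{b..z}; do
    pvscan --cache "$dev" &
done
wait

# case 2: many concurrent per-VG autoactivation commands
for vg in $(vgs --noheadings -o vg_name); do
    vgchange -aay "$vg" &
done
wait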

The lvm-activate-vgs services completely avoid 1 by skipping the uevent
pvscans altogether, and they avoid 2 by running one vgchange -aay for all
VGs.  So this seems to be pretty close to an optimal solution, but I am
interested to know more precisely which bottlenecks we're avoiding.

I believe you're suggesting that bottleneck 1 doesn't really exist, and
that we're mainly suffering from 2.  If that's true, then we could
continue to utilize all the uevents+pvscans, and take advantage of them to
optimize the vgchange -aay commands.

That's an interesting idea, and we actually have the capability to try
that right now in my latest dev branch.  The commit "hints: new pvs_online
type" will do just that.  It will use the pvs_online files (created by
each uevent+pvscan) to determine which PVs to activate VGs from.
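
For anyone following along, the state the new hint type reads is the set
of files under /run/lvm/pvs_online/ written by each pvscan --cache (the
same directory cleared at the start of the session below).  A quick way to
peek at what vgchange would see (the exact file contents may differ
between versions):

ls /run/lvm/pvs_online/ /run/lvm/vgs_online/
grep -r . /run/lvm/pvs_online/    # per-PV records of online state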

$ pvs
  PV                 VG Fmt  Attr PSize    PFree   
  /dev/mapper/mpatha mm lvm2 a--  <931.01g <931.00g
  /dev/sdc           cd lvm2 a--  <931.01g  931.00g
  /dev/sdd           cd lvm2 a--  <931.01g <931.01g
$ rm /run/lvm/{pvs,vgs}_online/*
$ vgchange -an
  0 logical volume(s) in volume group "cd" now active
  0 logical volume(s) in volume group "mm" now active
$ vgchange -aay
  1 logical volume(s) in volume group "cd" now active
  3 logical volume(s) in volume group "mm" now active
$ vgchange -an
  0 logical volume(s) in volume group "cd" now active
  0 logical volume(s) in volume group "mm" now active
$ pvscan --cache /dev/sdc
  pvscan[929329] PV /dev/sdc online.
$ pvscan --cache /dev/sdd
  pvscan[929330] PV /dev/sdd online.
$ vgchange -aay --config devices/hints=pvs_online
  1 logical volume(s) in volume group "cd" now active
$ pvscan --cache /dev/mapper/mpatha
  pvscan[929338] PV /dev/mapper/mpatha online.
$ vgchange -aay --config devices/hints=pvs_online
  1 logical volume(s) in volume group "cd" now active
  3 logical volume(s) in volume group "mm" now active

vgchange is activating VGs only from the PVs that have been pvscan'ed.  So
if a large volume of uevents+pvscans is not actually a bottleneck, then it
looks like we could use them to optimize the vgchange commands in the
lvm-activate-vgs services.  I'll set up some tests to see how it compares.
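
The comparison I have in mind is roughly the following sketch, where the
loop just stands in for the concurrent per-VG activations that the
event-based path would issue (the names and the loop itself are only
illustrative):

# assumes all PVs have already been marked online by pvscan --cache

# one global activation pass using the pvs_online hints
vgchange -an
time vgchange -aay --config devices/hints=pvs_online

# versus one concurrent vgchange -aay per VG
vgchange -an
time ( for vg in $(vgs --noheadings -o vg_name); do
           vgchange -aay "$vg" &
       done; wait )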

Dave

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



