That helped bring the lvcreate time down from 2 minutes to 1 minute, so that's an
improvement. Thank you. The source of the remaining slowdown is the writing of
metadata to my 4 PVs. The writes are small and the arrays are all RAID 5, so each
metadata write also requires a read (the usual read-modify-write penalty for
partial-stripe writes on RAID 5). I'm still at a loss as to why this was not a
problem when running F15, but the filter is a workable solution for me, so I'll
leave it alone.

--Larkin

On 3/26/2012 3:55 PM, Ray Morris wrote:
> Put -vvvv on the command and see what takes so long. In our case,
> it was checking all of the devices to see if they were PVs.
> "All devices" includes LVs, so it was checking LVs to see if they
> were PVs, and activating an LV triggered a scan in case it was
> a PV, so activating a volume group was especially slow (hours).
> The solution was to use "filter" in lvm.conf like this:
>
> filter = [ "r|^/dev/dm.*|", "r|^/dev/vg-.*|", "a|^/dev/sd*|", "a|^/dev/md*|", "r|.*|" ]
>
> That checks only /dev/sd* and /dev/md*, to see if they are PVs,
> skipping the checks of LVs to see if they are also PVs. Since the
> device list is cached, use vgscan -vvvv to check that it's checking
> the right things, and maybe delete that cache first. My rule IS
> a bit redundant because I had trouble getting the simpler form
> to do what I wanted. I ended up using a belt and suspenders
> approach, specifying both "do not scan my LVs" and "scan only
> /dev/sd*".
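
(For the archives, a sketch of the cache-flush-and-rescan check Ray describes.
The cache file path here is an assumption: /etc/lvm/cache/.cache is the default
on Fedora-era LVM2, while older versions kept it at /etc/lvm/.cache; check the
cache_dir setting in your lvm.conf.)

    # force LVM to re-evaluate the filter by removing the cached device list
    rm -f /etc/lvm/cache/.cache

    # rescan with maximum verbosity and watch which /dev nodes get probed;
    # only /dev/sd* and /dev/md* should be opened as PV candidates
    vgscan -vvvv 2>&1 | less

If your LVs still show up as scan candidates in that output, the filter line in
the devices section of /etc/lvm/lvm.conf isn't matching the way you expect.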