Hi.

I'm running lvm2-2.02.26. I've found that vgchange -a y/n causes remarkable memory consumption on volume groups with a large number of small volumes. I tried it on a VG with 1024 LVs of 8 MB each. Interestingly, the type of operation does not seem to matter much: deactivating an already unavailable volume group generates similar memory pressure to actually activating it.

Looking at the code, I've only gained a partial understanding of where exactly the memory is spent -- and for what. Here is what I believe I do understand; maybe someone can enlighten me:

Iterating through all the volumes pushes the data segment up by about 750k -- per iteration. The memory allocated per volume never seems to get freed. So, together with the memory locking performed in lock_vol, it's only a matter of installed RAM and the number of volumes before the OOM killer kicks in.

_vg_read() seems to play a role; apparently it is replayed twice for each LV (once for the lock, once for the unlock). To an outside observer like me, the path taken to get there seems rather strange: the LV is handed over as a UUID string to lock_vol(). In the '-an' case, this is passed on to lv_deactivate, which will (re-)load both the VG and LV metadata in order to get the respective lvinfo struct.

So what I don't really get is: why is that data reread, especially the VG metadata? Or am I missing something? Second: why isn't that memory freed after returning from activate_lv? But most importantly: could this be fixed?

Best,
Daniel
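
P.S. A rough recipe to reproduce this, in case it helps. The device name is obviously just an example, and any way of watching the vgchange process's memory (e.g. GNU time's maximum RSS, or simply top) shows the growth:

  pvcreate /dev/sdb1
  vgcreate testvg /dev/sdb1

  # create 1024 small LVs of 8 MB each
  for i in $(seq 1 1024); do
      lvcreate -n lv$i -L 8M testvg
  done

  # peak memory of a full deactivation/activation pass
  /usr/bin/time -v vgchange -a n testvg
  /usr/bin/time -v vgchange -a y testvg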