>> 1) edit lvm.conf and disable all metadata caching
>
> I have searched the lvm documentation, but could not find anything about
> metadata caching.

write_cache_state = 0
use_lvmetad = 0

I also move the cache_dir out of /etc to an ephemeral location. /var/lock is
tmpfs on my machines.

cache_dir = "/var/lock/subsys"

>> 2) edit lvm.conf and set the locking type to '4' and set
>> wait_for_locks=0
>
> The clients have a local volume group besides the shared one. Setting the
> locking type to 4 prevents modification of this local volume group, too,
> so it does not seem to be an option.

You need two different lvm.conf files. Use the filter directive so that one
sees only the local devices and the other only the shared devices.

filter = [ "a|local_devices|", ... "r|shared_devices|" ]
vs
filter = [ "a|shared_devices|", ... "r|local_devices|" ]

Run pvscan and pvs and make sure the exclusions are happening correctly.

> Each LV is statically assigned to one single host.

That's an important distinction! You can use just a single lvm.conf and a
well-crafted volume_list. I haven't played with tags, but they might be
rather useful here as well (a fuller sketch is at the bottom of this mail).

volume_list = [ "vglocal", "vg_shared/lv_mine", "@mytag1" ]

> As I tried to describe above, the hosts do not behave as master or slave.

Never mind then. I was implementing full-on Active/Active storage heads that
could take over each other's volumes at will.

> Is this possible with or without cLVM?

Without.
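
For reference, a minimal, untested sketch of how those pieces could fit
together in a single lvm.conf. The section layout assumes a stock lvm.conf,
and vglocal, vg_shared/lv_mine and @mytag1 are just the placeholder names
from above, so substitute your own:

    devices {
        # don't persist the device cache between runs
        write_cache_state = 0
        # keep the cache directory on tmpfs rather than under /etc
        cache_dir = "/var/lock/subsys"
    }

    global {
        # only relevant on LVM versions that still ship lvmetad
        use_lvmetad = 0
        # don't block waiting on locks another host might hold
        wait_for_locks = 0
    }

    activation {
        # only the local VG, this host's LV in the shared VG, and anything
        # tagged mytag1 may be activated here; give every other host its
        # own LV entry or tag
        volume_list = [ "vglocal", "vg_shared/lv_mine", "@mytag1" ]
    }

A quick sanity check afterwards (again with placeholder names):

    pvscan                      # local and shared PVs should both show up
    vgchange -ay vg_shared      # should activate only lv_mine on this host
    lvs -o +lv_tags vg_shared   # see which LVs carry the tag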