Is it possible to get POSIX locks in a high-availability configuration with glusterfs? Specifically, is it sufficient to have locking enabled on the server side, below the AFR layer?

|glusterfsd....|              |.......glusterfs client..

[u1]---[u1-lock]---+
                   :=:--afr0--+
[u3]---[u3-lock]---+          |
                              +----client/unify-----
[u2]---[u2-lock]---+          |
                   :=:--afr0--+
[u4]---[u4-lock]---+          |
                              |
[u5-ns]-----------------------+

Here is part of the server configuration ...

volume u1
  type storage/posix
  option directory /glusterfs/u1/export
end-volume

volume u2
  type storage/posix
  option directory /glusterfs/u2/export
end-volume

volume u1-lock
  type features/posix-locks
  subvolumes u1
end-volume

volume u2-lock
  type features/posix-locks
  subvolumes u2
end-volume

I have a bad feeling that in a race condition, half of the clients will check the locks on one brick while the other half will check the other brick ... and that simultaneous transactions may be quite happy with getting one out of two flocks, or only release one of the two when they are done ... &:-)
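One way to probe for exactly this failure mode is to take an exclusive lock on a file from one process and then attempt a non-blocking lock on the same file from a second process. Run against the same file seen through two different client mounts, both attempts succeeding would mean the locks are not being coordinated across the replicas. A minimal sketch (the path is a stand-in; on a real deployment you would open the file once via each client mountpoint, e.g. two machines mounting the same volume):

```python
import fcntl
import os
import tempfile

# Stand-in for a file on a GlusterFS client mount, e.g. /mnt/gluster/locktest.
# A local temp file is used here so the sketch is self-contained.
fd, path = tempfile.mkstemp()
os.close(fd)

holder = open(path, "r+")
fcntl.lockf(holder, fcntl.LOCK_EX)          # "first client" takes the lock

pid = os.fork()
if pid == 0:
    # "Second client": open independently and try a non-blocking lock.
    contender = open(path, "r+")
    try:
        fcntl.lockf(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os._exit(1)                         # granted twice: locking is broken
    except OSError:
        os._exit(0)                         # correctly refused
_, status = os.waitpid(pid, 0)
print("locking consistent" if os.WEXITSTATUS(status) == 0 else "LOCKING BROKEN")
```

On a local filesystem this prints "locking consistent"; the question is whether it still does when the two opens go through different bricks of an AFR pair.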