Hi,

when using GlusterFS in a virtualized infrastructure (OpenNebula in my case), it would be nice to implement a simple fencing mechanism to prevent data corruption when the high-availability hook triggers and the VM instances that were running on a failing host are restarted on a new (working) host.

I was wondering if I can use the auth.reject volume option to block access from a host that has already mounted the gluster volume earlier, but I fear that it is only checked at mount time and is not enforced for already mounted volumes.

I would be grateful if someone has another idea to force the disconnection of a client on a particular host (the alternative I can think of is using iptables to block IP traffic from that host). Keep in mind that this has to be done from the other hosts and not on the failing host itself (because it's failing and I cannot connect to it remotely :))

Thanks for any idea.

-- 
Giovanni Toraldo - LiberSoft
http://www.libersoft.it
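
P.S. To be concrete, these are the two approaches I have in mind, as untested sketches (the volume name "datastore" and the IP address are just placeholders for the failed host):

    # reject further connections from the failed host; my doubt is that this
    # is only evaluated at mount time and does not kick out an existing client
    gluster volume set datastore auth.reject 192.168.1.10

    # or, run on every storage node: drop all traffic coming from the failed host
    iptables -I INPUT -s 192.168.1.10 -j DROP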