Hi,

Thanks for the feedback so far, but it still does not work as expected. I tried to mount the volume with this fstab entry:

my-dev-os-compute1:/glustervmstore /var/lib/nova/instances glusterfs defaults,backupvolfile-server=my-dev-os-compute2 0 1

If I mount the volume with this fstab on my-dev-os-compute4 and then reboot my-dev-os-compute3, the volume is gone:

root@my-dev-os-compute4:/var/lib/nova/instances# ls
ls: reading directory .: Transport endpoint is not connected

[2011-11-08 07:55:47.870661] W [fuse-bridge.c:2092:fuse_readdir_cbk] 0-glusterfs-fuse: 6: READDIR => -1 (Transport endpoint is not connected)

That looks like a bug to me. It doesn't matter which brick I shut down; every time, the volume seems to be gone.

Regards,
Christian

2011/11/2 M. Vale <maurovale at gmail.com>:
>
> On 2 November 2011 03:16, Amar Tumballi <amarts at redhat.com> wrote:
>>
>>> On Tue, Nov 1, 2011 at 9:26 AM, Christian Wittwer <wittwerch at gmail.com> wrote:
>>> > I'm mounting the volume with fstab on my-dev-os-compute2 like this:
>>> > my-dev-os-compute1:/glustervmstore   /var/lib/nova/instances   glusterfs   defaults   0   1
>>>
>>> Why don't you mount every client from localhost, if all machines join the volume?
>>>
>>> localhost:/glustervmstore   /var/lib/nova/instances   glusterfs   defaults   0   1
>>>
>>
>> Can you try an option like the one below:
>>
>> my-dev-os-compute1:/glustervmstore /var/lib/nova/instances glusterfs defaults,backupvolfile-server=another-server-hostname-or-ip 0 1
>>
>> That should give high availability, but if both servers are down, it won't work.
>>
>> Regards,
>> Amar
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>
> Thanks, it worked.
>
> Regards
> MV
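
For reference, the suggested fstab entry breaks down as follows (a sketch using the hostnames from this thread; the comments reflect my understanding of the mount options, not authoritative documentation):

```
# <volfile server>:<volume>         <mount point>            <type>     <options>                                         <dump> <pass>
my-dev-os-compute1:/glustervmstore  /var/lib/nova/instances  glusterfs  defaults,backupvolfile-server=my-dev-os-compute2  0      1

# Note: as I understand it, backupvolfile-server only gives the client a
# second server to fetch the volume file from at mount time; once mounted,
# the client talks to the bricks directly. If the volume itself is not
# replicated, taking a brick down can still make data unavailable
# ("Transport endpoint is not connected"), which may explain the symptom
# above. "gluster volume info" shows whether the volume has replicas.
```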