Hi,

(Using CentOS 6.5 / Gluster 3.6.0beta3.)

I've been testing various failure scenarios and have managed to upset gluster by changing which bricks are visible to the client.

I created a volume gv0 with 2 replicas across 2 bricks, test1:/data/brick/gv0 and test2:/data/brick/gv0. I edited iptables on the brick servers test1 and test2 such that only test2 was visible to the client machine test5, and then made some changes. Some hours later, I edited iptables on test1 and test2 so that only test1 was visible to test5. This seemed to upset it (see transcript below).

Would it be possible for clients to cache a full list of brick servers and then use that cached list to do $clever_things to maintain connectivity / re-connect automatically?
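For reference, the volume layout and the iptables switch were along these lines (the commands below are approximate reconstructions of what I did, not a copy-paste of what I actually ran):

# volume creation - 2-way replica across the two bricks
gluster volume create gv0 replica 2 test1:/data/brick/gv0 test2:/data/brick/gv0
gluster volume start gv0

# first phase - block the test5 client on test1 only, so test5 can only reach test2
iptables -I INPUT -s test5 -j DROP        # run on test1

# some hours later - swap it round, so test5 can only reach test1
iptables -D INPUT -s test5 -j DROP        # run on test1
iptables -I INPUT -s test5 -j DROP        # run on test2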
Transcript follows (this was just after I switched iptables round on the bricks, after the test5 client had been using just the one accessible brick for a while; I was sitting in a subdirectory of /mnt/gv0 initially):

test5# ls -l
ls: cannot open directory .: Transport endpoint is not connected

test5# df
df: `/mnt/gv0-slave': Transport endpoint is not connected
df: `/mnt/gv0': Transport endpoint is not connected
Filesystem                    1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root    5716804 1842760   3583640  34% /
tmpfs                            508140       0    508140   0% /dev/shm
/dev/xvda1                       495844  121241    349003  26% /boot
test1:gv1                       5716736 2299008   3127424  43% /mnt/gv1

test5# mount -t glusterfs test1:gv0 /mnt/gv0
ERROR: Mount point does not exist.
Usage:  mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
Options:
man 8 mount.glusterfs
To display the version number of the mount helper:
mount.glusterfs --version

test5# cd /
test5# mount -t glusterfs test1:gv0 /mnt/gv0
ERROR: Mount point does not exist.
Usage:  mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
Options:
man 8 mount.glusterfs
To display the version number of the mount helper:
mount.glusterfs --version

test5# ls -l /mnt/
ls: cannot access /mnt/gv0: Transport endpoint is not connected
ls: cannot access /mnt/gv0-slave: Transport endpoint is not connected
total 4
d????????? ? ? ? ? ? gv0
d????????? ? ? ? ? ? gv0-slave
drwxr-xr-x 3 root root 4096 Oct 24 12:29 gv1

test5# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /proc/xen type xenfs (rw)
test4:gv0-slave on /mnt/gv0-slave type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
test1:gv0 on /mnt/gv0 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
test1:gv1 on /mnt/gv1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

test5# umount /mnt/gv0-slave
test5# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /proc/xen type xenfs (rw)
test1:gv0 on /mnt/gv0 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
test1:gv1 on /mnt/gv1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

test5# umount /mnt/gv0
test5# ls -l /mnt
total 12
drwxr-xr-x 2 root root 4096 Sep 23 15:55 gv0
drwxr-xr-x 2 root root 4096 Oct 13 15:09 gv0-slave
drwxr-xr-x 3 root root 4096 Oct 24 12:29 gv1

--
Cheers,
Kingsley.