Hmmm, I wonder what's different now, when it behaves as expected, versus before, when it behaved badly? Well, by now both systems have been up and running in my testbed for several days. I've unmounted and mounted the volumes a bunch of times. But thinking back, the behavior changed when I mounted the volume on each node with the other node as the backupvolfile-server.

On fw1:

mount -t glusterfs -o backupvolfile-server=192.168.253.2 192.168.253.1:/firewall-scripts /firewall-scripts

And on fw2:

mount -t glusterfs -o backupvolfile-server=192.168.253.1 192.168.253.2:/firewall-scripts /firewall-scripts

Since then, I've stopped and restarted glusterd and unmounted and remounted the volumes as set up in fstab, without the backupvolfile-server option. But maybe that backupvolfile-server switch set some parameter permanently.

Here is the rc.local I set up on each node. I wonder if some kind of timing issue is going on, or if -o backupvolfile-server=(the other node) permanently cleared a glitch from the initial setup? I guess I could try some reboots and see what happens.

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
#
# Note removed by default starting in Fedora 16.

touch /var/lock/subsys/local

#***********************************
# Local stuff below

echo "Making sure the Gluster stuff is mounted"
mount -av
# The fstab mounts happen early in startup, then Gluster starts up later.
# By now, Gluster should be up and running and the mounts should work.
# That _netdev option is supposed to account for the delay but doesn't seem
# to work right.

echo "Starting up firewall common items"
/firewall-scripts/etc/rc.d/common-rc.local

Here is what fstab looks like on each node.
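Before getting to fstab: if the timing theory is right, one workaround would be to have rc.local poll for the mount instead of firing a single mount -av and hoping glusterd is already up. This is just a sketch of the idea, not what's running on these nodes; wait_for_mount and the 30-second retry budget are my own invention.

```shell
# Sketch only: retry the fstab mount until the GlusterFS volume appears,
# rather than racing glusterd with a single "mount -av" at boot.
# wait_for_mount and the default budget of 30 tries are illustrative.

wait_for_mount() {
    dir=$1
    tries=${2:-30}                  # roughly how many seconds to keep trying
    while [ "$tries" -gt 0 ]; do
        if mountpoint -q "$dir"; then
            return 0                # it's mounted; we're done
        fi
        mount "$dir" 2>/dev/null    # retry this mount point's fstab entry
        tries=$((tries - 1))
        sleep 1
    done
    return 1                        # gave up; glusterd never served the volume
}

# In rc.local, instead of a bare "mount -av", something like:
# wait_for_mount /firewall-scripts || echo "firewall-scripts never mounted"
```

If the retries fix it, that would point at a startup-ordering problem rather than anything backupvolfile-server changed.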