Hello,

I'm trying to get a new gluster3 installation behaving like my old gluster2 one. This message is inspired by messages and postings such as these:

- http://prefetch.net/blog/index.php/2011/11/27/some-interesting-insights-on-the-gluster-replicated-volume-replica-value/
- http://gluster.org/pipermail/gluster-users/2011-February/006598.html

The authors of the above approach this from the angle that gluster can't handle adding single nodes as needed. However, in my case, I have this working perfectly on gluster2, and want to replicate the same setup on gluster3.

Here is a quick rundown of the setup:

- Host alpha has a directory only it writes to, at /srv/clusterfs
- The gluster volume on each host is mounted at /mnt/clusterfs
- alpha also has the gluster volume mounted on /mnt/clusterfs

So, a really simple setup: just replication, for having the data available locally on each server. It replaced an old rsync cron job years ago.

The glusterfs.vol for gluster2 in this setup looks like this:

    volume alpha
      type protocol/client
      option transport-type tcp
      option remote-host 192.168.1.26
      option remote-subvolume brick
    end-volume

    volume nu
      type protocol/client
      option transport-type tcp
      option remote-host 192.168.1.22
      option remote-subvolume brick
    end-volume

    (...)

    volume replicate
      type cluster/replicate
      subvolumes alpha nu xi omicron sigma
    end-volume

The glusterfsd.vol on alpha has, amongst other things, this in it:

    volume posix
      type storage/posix
      option directory /srv/clusterfs
      option mandate-attribute no
    end-volume

As needed, a new volume was added when a host was added, and this has been working splendidly for about four years.

To recap:

- alpha:/srv/clusterfs <-- where the files are
- alpha|beta|gamma:/mnt/clusterfs <-- where the files replicate

How can I get the same setup with gluster3? Is there still a volume type "storage/posix"? The nfs-server.vol file looks strikingly similar to the above, but I'm guessing that's not the main config. I really don't want to go back to rsync.

Many thanks,
--Pat
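P.S. For what it's worth, here is my best guess at the gluster3 CLI equivalent, based on what I've read so far. The hostnames, the volume name "clusterfs", and the brick path on the non-alpha hosts are just placeholders for my real ones, and I haven't tested any of this yet. I also don't know whether add-brick would let me keep growing this one node at a time the way I did before:

```shell
# Untested sketch of a gluster 3.x setup mirroring my gluster2 layout.
# Assumes hostnames resolve and glusterd is running on every host.
gluster peer probe nu
gluster peer probe xi
gluster peer probe omicron
gluster peer probe sigma

# Replica count equal to the brick count, so every host keeps a full
# copy -- the same effect as the old cluster/replicate translator.
gluster volume create clusterfs replica 5 \
    alpha:/srv/clusterfs nu:/srv/clusterfs xi:/srv/clusterfs \
    omicron:/srv/clusterfs sigma:/srv/clusterfs

gluster volume start clusterfs

# Then on each host:
mount -t glusterfs localhost:/clusterfs /mnt/clusterfs
```

Is that roughly the intended replacement for hand-written .vol files, or am I off track?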