> --- glusterfsd.vol
>
> volume posix
>   type storage/posix
>   option directory /home/storage/export
> end-volume
>
> volume locks
>   type features/locks
>   subvolumes posix
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp
>   option auth.addr.brick.allow 10.90.190.90,10.90.190.91
>   subvolumes brick
> end-volume
>
> --- glusterfs.vol
>
> volume remote1
>   type protocol/client
>   option transport-type tcp
>   option remote-host client1
>   option remote-subvolume brick
> end-volume
>
> volume remote2
>   type protocol/client
>   option transport-type tcp
>   option remote-host client2
>   option remote-subvolume brick
> end-volume
>
> volume replicate
>   type cluster/replicate
>   subvolumes remote1 remote2
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option window-size 1MB
>   subvolumes replicate
> end-volume
>
> volume cache
>   type performance/io-cache
>   option cache-size 512MB
>   subvolumes writebehind
> end-volume

Well, that all looks OK to me. Anyone else see anything out of place?

> You wrote about AFR. Maybe I use the wrong replication options?
> (type cluster/replicate) ?

No, AFR is just the original name for replicate, so cluster/replicate is the right translator to use.

So, you are not seeing self-heal even after an ls -lR /mnt/glusterfs? Is there anything in the logs during the trial? If so, please post them.

Jeff.
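
P.S. If it helps, here is roughly what I mean by that, as a sketch only. I'm assuming the replicated volume is mounted at /mnt/glusterfs and guessing at the log location; adjust the log path to wherever your glusterfs process writes (it depends on how you start it, e.g. the -l/--log-file option):

    # stat every file through the mount; replicate triggers self-heal
    # on lookup, so this should kick it off for anything out of sync
    ls -lR /mnt/glusterfs > /dev/null

    # then look at the client log for replicate/self-heal messages
    # (path is a guess, use whatever log file your mount was started with)
    tail -n 100 /var/log/glusterfs/glusterfs.log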