Here's my setup. I have two servers with the exact same hardware and OS configuration. Each server is also a client. I've tried to do a "DRBD"-style setup using GlusterFS. Let's say I have GlusterFS mounted on /mnt/gluster.

On Server 1's shell I issue this command:

cp /mnt/gluster/test1000mbfile /mnt/gluster/test100mbfile

The transfer rate to Server 2 reaches about 30-40 MB/s. The problem is that if I do the same thing on Server 2, the transfer rate to Server 1 is only about 10-15 MB/s.

Does anyone have any idea what is causing the slow performance from Server 2 to Server 1? Server 2 is capable of sending at 40 MB/s to Server 1 in an SCP transfer, so I've ruled out network or hardware issues.

=====================================================

Both servers have the same glusterfsd.vol and glusterfs.vol files.

Client file (glusterfs.vol):

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host xx.xx.xx.xx
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host xx.xx.xx.xx
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Server file (glusterfsd.vol):

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
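In case it helps anyone reproduce the numbers, this is roughly how I'm comparing the two directions. It's only a sketch: the test file name (ddtest) is arbitrary, and the raw-network check assumes iperf is installed on both hosts.

# Timed write through the Gluster mount; run on Server 1, then repeat on Server 2
# and compare the MB/s figure dd reports at the end.
dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1000 conv=fsync

# Raw TCP throughput between the two hosts, independent of GlusterFS:
# on Server 1:
iperf -s
# on Server 2:
iperf -c xx.xx.xx.xx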