Thanks for the responses. It turned out that the issue was with the disk in one of the clients. Using the other client machine it appears to work fine, although it is very slow: an ls -l on a directory containing about 150 files took more than 5 minutes, and the rsync only manages roughly one file every 3 seconds (average file size 30 to 50 KB). If I rsync to the client's local hard disk instead, it copies many files per second.

I have tried appending the following to the client config from my original post, but it doesn't appear to have made any noticeable difference:

volume readahead
  type performance/read-ahead
  option page-size 128kB                # 256KB is the default option
  option page-count 4                   # 2 is default option
  option force-atime-update off         # default is off
  subvolumes unify
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 1MB             # default is 0bytes
  option flush-behind on                # default is 'off'
  subvolumes readahead
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 64MB                # default is 32MB
  option page-size 1MB                  # 128KB is default option
  option priority *:0                   # default is '*:0'
  option force-revalidate-timeout 2     # default is 1
  subvolumes writebehind
end-volume

Can anyone tell me if I have done this correctly, and/or suggest anything else I can do to fix this performance issue?

Thanks
Simon
--
View this message in context: http://www.nabble.com/Rsync-failure-problem-tp18420195p18462569.html
Sent from the gluster-devel mailing list archive at Nabble.com.