On 02/27/2013 08:34 AM, Nux! wrote:
> To me that sounds just about right, it's the kind of speed I get as
> well. If you think of it, what happens in the background is that the 1GB
> file is written not only from you to one of the servers, but also from
> this server to the other servers, so it gets properly
> replicated/distributed.

Actually not quite. The data has to go directly from the client to each of the servers, so the client's outbound bandwidth gets divided by N replicas. The scheme you describe would actually perform better, because then you get to use the first server's outbound bandwidth as well: for N=2 you'd be running at full speed (though at a slight cost in latency if the writes are synchronous). At N=4 you'd be at 1/3 speed, because one copy has to go from the client to the first server while three have to go from that first server to the others. Some systems even do "chain" replication, where each server sends only to one other, giving an even more extreme tradeoff between bandwidth and latency.

In the scenario that started this thread, the expected I/O rate is wire speed divided by the number of replicas. If the measured network performance is 100MB/s (800Mb/s), then with two replicas one would expect no better than 50MB/s, and with four replicas no better than 25MB/s. Any better result is likely to be an artifact of caching, meaning the real bandwidth isn't actually being measured. Worse results are often caused by contention, such as very large buffers starving something else of memory. I don't know of any reason why results would be worse with distribution, and I have never seen that effect myself.
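The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is just an illustration of the two replication schemes discussed (client fan-out vs. first-server fan-out); the function names are my own, not part of any Gluster API:

```python
def client_fanout_mbps(wire_speed_mbps, replicas):
    """Client writes directly to every replica: its outbound
    bandwidth is split N ways, so effective speed is wire / N."""
    return wire_speed_mbps / replicas

def first_server_fanout_mbps(wire_speed_mbps, replicas):
    """Client sends one copy to the first server, which forwards
    N-1 copies; for N > 2 the first server's outbound link
    (N-1 outgoing copies) becomes the bottleneck."""
    return wire_speed_mbps / max(1, replicas - 1)

wire = 100  # MB/s measured network speed (800 Mb/s)
for n in (2, 4):
    print(f"N={n}: client fan-out {client_fanout_mbps(wire, n):.1f} MB/s, "
          f"first-server fan-out {first_server_fanout_mbps(wire, n):.1f} MB/s")
```

For N=2 this gives 50 MB/s for client fan-out versus full wire speed for first-server fan-out, and for N=4 it gives 25 MB/s versus 1/3 of wire speed, matching the figures above.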