On Mon, Apr 23, 2012 at 03:46:01PM +0100, lejeczek wrote:
> but is it a true server-side replication?

No, you're right: it's driven from the client side, I believe. This is so that the client can connect to either server if the other is down.

> if I'm not mistaken, afr would take care of it while the client (fuse)
> would suffice if it only mapped/connected one brick
> let's say two nodes/peers are clients at the same time; both
> clients/bricks would only mount themselves on 127.0.0.1, and
> replication would still work, does it?

Sorry, I don't understand that question.

Using the native client, the mount is only used to make initial contact and retrieve the volume info. After that point, the client talks directly to the brick(s) it needs to, as defined in the volume info.

So if you mount 127.0.0.1:/foo on /foo (because 127.0.0.1 happens to be one of the nodes in the cluster), and volume /foo contains server1:/brick1 and server2:/brick2, then the client will talk to "server1" and/or "server2" when reading and writing files.

On server1, you could put "127.0.0.1 server1" in the hosts file if you'd like to force communication over that IP, but in practice using server1's public IP is fine - it's still a loopback communication.

Indeed, if you have three nodes in your cluster, you can mount server3:/foo on /foo, and once the volume is mounted, data transfer will only take place between the client and server1/server2. (This is the native client, remember - NFS is different: the traffic will hit server3 and then be forwarded to server1 and/or server2 as required.)

The only other "server-side" replication that I know of is geo-replication.

Regards,

Brian.
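
P.S. For the archives, here is a minimal sketch of the setup described above. The volume name "foo", the hostnames server1/server2, and the brick paths are just examples, not anything from your cluster; this needs a running glusterd on both nodes, so adapt as required:

```shell
# Create a two-way replicated volume across two bricks
# (run once, on either node; names and paths are illustrative).
gluster volume create foo replica 2 server1:/brick1 server2:/brick2
gluster volume start foo

# Native (FUSE) mount from any node. The mount host is only contacted
# to fetch the volume info; after that, reads and writes go directly
# to server1 and server2, even if you mounted via 127.0.0.1.
mount -t glusterfs 127.0.0.1:/foo /foo
```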