On 09/08/2009 01:18 PM, Daniel Maher wrote:
>
>> For "shared nothing", each node really does need to be fully
>> independent and able to make its own decisions. I think the GlusterFS
>> folk have the model right in this regard.
>>
>> The remaining question is whether they have the *implementation*
>> right. :-)
>
> You're taking my statement too far. :) All I meant was that I don't
> think the clients should be responsible for replication - that, in my
> mind, is the job of the servers.

Purposefully so, I think. I was more stealing your thread to start one of my own. :-)

But, to stay with yours for a second - shouldn't it be possible to configure GlusterFS such that the server does the replication today? That is, the client connects to one of the servers, and that server has a cluster/replicate volume made up of one local volume and several remote volumes. Do this on each of the servers. Then configure the client with a cluster/ha volume so that it can fail over to another server if one server is down.

I haven't tried it myself, but "servers responsible for replication" seems to be possible to do today. :-)

It also forces an understanding of what replication involves. Ultimately, somebody must do the replication, and ultimately, the client must be able to connect to multiple servers. The real difference between the recommended configuration and the one I suggest above is which node is responsible for sending (N-1) copies of each request to the "other" nodes in the replication cluster: does it consume client -> server bandwidth (client-side replication) or server -> server bandwidth (server-side replication)?

The other questions are which model has the most potential for optimization, and which has the most potential for automatic failure recovery. I think these answers are a bit grey right now. GlusterFS is pushing the envelope for client-side replication. Other solutions such as Lustre give up on one or both of metadata or content replication.

Cheers,
mark

--
Mark Mielke <mark at mielke.cc>
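
P.S. To make the above concrete, here is a rough, untested sketch of the sort
of volfiles I have in mind, in GlusterFS 2.x volfile syntax. The hostnames
server1/server2, the export directory, and the volume names are all
placeholders, and I may well have details wrong - the point is only the shape:
each server replicates from its local brick out to the other server's plain
brick, and the client uses cluster/ha across the servers' replicated exports.

On server1 (mirror this on server2, swapping the hostnames):

  volume posix
    type storage/posix
    option directory /data/export        # local backing store (placeholder path)
  end-volume

  volume locks
    type features/locks                  # posix locking, wanted under replicate
    subvolumes posix
  end-volume

  volume remote2
    type protocol/client                 # the peer server's *plain* brick,
    option transport-type tcp            # not its replicate volume, so we do
    option remote-host server2           # not replicate the replication
    option remote-subvolume locks
  end-volume

  volume replicate
    type cluster/replicate               # the server, not the client, fans out writes
    subvolumes locks remote2
  end-volume

  volume server
    type protocol/server
    option transport-type tcp
    subvolumes replicate locks
    option auth.addr.replicate.allow *   # clients mount this
    option auth.addr.locks.allow *       # the peer server writes into this
  end-volume

On the client:

  volume srv1
    type protocol/client
    option transport-type tcp
    option remote-host server1
    option remote-subvolume replicate
  end-volume

  volume srv2
    type protocol/client
    option transport-type tcp
    option remote-host server2
    option remote-subvolume replicate
  end-volume

  volume ha
    type cluster/ha                      # switch servers if one goes down
    subvolumes srv1 srv2
  end-volume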