On Wed, Aug 10, 2011 at 03:30:22PM +0200, David Pusch wrote:

> The objective is to create a redundant system. Shouldn't gluster be
> writing on all 6 nodes simultaneously rather than sequentially? Else it
> would seem like a rather poor choice for highly redundant systems.

I get the redundant part. You'll see in recent discussion here that it's typical to deploy on top of hardware RAID, so you've got redundancy within each node. Then if you replicate across two nodes, you've got redundancy on top of redundancy. At that point you're far more likely to lose the data center to a catastrophe than to lose your data on both nodes at once. The next move might be geo-replication, not more local replication. Since it's asynchronous, it won't delay local file operations.

I'm far from an expert on replication across a network, but consider that you're starting from one copy of the file. Your file is not going to copy to all six Gluster nodes at once. It's going to copy to one, and that one node is going to copy to each of the five others, insisting on completing those copies while it completes its reception of the file. So your bottleneck is there, between the Gluster server your file has gone to and the five other systems you've instructed it to write the file to as it comes in.

Now, there was a very interesting report yesterday that the FUSE stuff works much better with the latest kernels, if that's a factor for you. But replicating to six systems at once is a strange thing to do. If your systems don't have RAID locally, there could be an argument for replicating to three, so that a coincidental failure of two drives on two of them doesn't cost you data. But six? You expect five systems to fail at once, from a cause that wouldn't take the sixth down too?

Whit
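P.S. The fan-out bottleneck is easy to put numbers on. This is just a back-of-the-envelope sketch, assuming (as described above) that the one receiving node forwards the stream to each of the other replicas over its own NIC, and assuming a full-duplex link so the inbound stream doesn't compete with the outbound copies. The function name and the 1 Gb/s figure are mine, not anything Gluster reports:

```python
def effective_write_rate(nic_mb_s, replicas):
    """Rough ceiling on write throughput when one node must forward
    the incoming stream to (replicas - 1) partners over one NIC.

    Assumes a full-duplex link, so the bottleneck is the outbound
    side shared by the forwarded copies. Hypothetical model only.
    """
    outbound_streams = max(1, replicas - 1)
    return nic_mb_s / outbound_streams

# A 1 Gb/s link carries very roughly 120 MB/s of payload.
print(effective_write_rate(120, 2))  # -> 120.0 (one partner, full link)
print(effective_write_rate(120, 6))  # -> 24.0 (five partners share the link)
```

So even in this idealized model, going from two replicas to six cuts the write ceiling by 5x before disks, FUSE, or protocol overhead enter the picture.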
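P.P.S. The "why six?" question can also be put in rough numbers. Sketch below, with a made-up per-node failure probability and the usual independence assumption, which is exactly what a data-center-wide catastrophe (power, fire, flood) violates:

```python
def p_all_replicas_lost(p_node, replicas):
    """Probability that every replica fails in the same window,
    assuming independent failures (hypothetical model; correlated
    failures like a site outage are NOT captured here)."""
    return p_node ** replicas

# Suppose each node independently has a 1% chance of dying in some window.
print(p_all_replicas_lost(0.01, 2))  # ~1e-4
print(p_all_replicas_lost(0.01, 6))  # ~1e-12
```

The extra four replicas buy you improbability way beyond the point that matters, because long before 1e-12 the dominant risk is a correlated failure that takes out all six nodes together, which more local replication does nothing about.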