On 6/30/10, Jeff Darcy <jdarcy at redhat.com> wrote:
> You can get either, depending on how you define your lower-level
> replicate volumes. If you define each to have exactly one app-node
> volume and one data-node volume, then it will replicate each file
> exactly twice no matter how it's used in higher-level nufa volumes. You
> can add multiple storage/posix volumes per data node to gain even more
> flexibility in how you do that. That's kind of the genius of the
> translator approach: instead of being limited to the combinations that
> are embedded in monolithic code, you can combine as many volumes on as
> many nodes in as many ways as you want and still end up with a single
> purpose-tuned filesystem for clients to use.

Just to make sure I got that right: with a nufa volume, a file is written to only one of the subvolumes listed in its definition. If that subvolume is a replicate volume, then the file is replicated on each of the volumes listed in *its* definition. E.g. in this case:

volume my_nufa
  type cluster/nufa
  option local-volume-name rep1
  subvolumes rep0 rep1 rep2
end-volume

A file is found on only one of rep0, rep1, or rep2. If it is on rep2, it becomes inaccessible when rep2 fails, e.g. if a network failure cuts rep2 off.

Then, when I add a rep3, gluster should automatically start putting new files onto it.

At this point, though, it seems that if I use nufa, I would have an issue if I add a storage-only rep3 instead of an app+storage node: none of the servers will use it until their local volume reaches maximum capacity, right? :D

So if I preferred to have the load spread out more evenly, I should be using cluster/distribute instead?
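For the even-spread case, a minimal client volfile sketch swapping cluster/nufa for cluster/distribute might look like the following. This is an illustration, not a tested config: rep0..rep3 are assumed to be the cluster/replicate volumes already defined above (rep3 being the new storage-only pair), and the volume name my_dist is made up.

```
# Hypothetical fragment: hash-based distribution instead of nufa.
# rep0..rep2 are the existing replicate subvolumes; rep3 is the
# newly added storage-only replicate pair.
volume my_dist
  type cluster/distribute
  subvolumes rep0 rep1 rep2 rep3
end-volume
```

With distribute, placement is decided by a hash of the filename across all subvolumes, so new files should land on rep3 at roughly the same rate as on the others; files created before rep3 was added are not moved automatically, so spreading the existing data would still take a rebalance step.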