Thank you to everyone who helped me with the Fedora 10 rpmbuild issue. The "--without modglfs" option worked like a charm.

I would like a sanity check on a GlusterFS design I am planning to implement. I have many machines, each with a different amount of disk storage available under a /scratch mountpoint. As a first step, I would like to use the "distribute" translator to spread files among these machines and present a unified view of them. Second, I would like to add redundancy, so that each file is stored on at least two machines and can easily be recovered if a node crashes.

My idea is to create two directories on each machine, "/scratch/.glusterfs_storage_primary" and "/scratch/.glusterfs_storage_secondary", and expose them as bricks. I would then run two instances of the distribute translator: one aggregating all the ".glusterfs_storage_primary" bricks, and the other aggregating all the ".glusterfs_storage_secondary" bricks. On top of that, I would use the replicate translator with self-healing to replicate files across these two aggregates.

My questions:

1) Are there any obvious flaws with this approach that I should know about before attempting it?

2) What would I have to do to ensure that the two copies of a file do not hash to the same node in the primary and secondary aggregates? My initial thought was that it would be a matter of listing the subvolumes in a different order in one distribute translator than in the other (say, alphabetical order in one and reverse alphabetical order in the other).

3) If I add or remove a subvolume from the distribute translator in a way that alters the position of the other subvolumes in the list (say, by adding a new subvolume in the middle of an alphabetized list), will this completely break the file-to-host hashing and make my existing files inaccessible?
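For concreteness, here is a minimal client-side volfile sketch of the layout I have in mind. The brick/volume names (nodeA-primary, etc.) are made up, and the protocol/client volumes that would actually connect to each server's exported bricks are omitted for brevity:

```
# Two distribute aggregates, one per storage directory,
# with replicate layered on top of both.

volume dist-primary
  type cluster/distribute
  subvolumes nodeA-primary nodeB-primary nodeC-primary
end-volume

volume dist-secondary
  type cluster/distribute
  # subvolumes listed in reverse order, per question 2
  subvolumes nodeC-secondary nodeB-secondary nodeA-secondary
end-volume

volume mirror
  type cluster/replicate
  subvolumes dist-primary dist-secondary
end-volume
```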
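To illustrate the intuition behind question 2, here is a toy Python sketch. It does NOT use GlusterFS's actual hashing scheme (I am substituting crc32 and a simple modulo for illustration); it only shows that if the secondary aggregate lists its subvolumes in reverse order, a file that lands on index i in the primary lands on index n-1-i in the secondary, so the two copies share a node only for the middle entry of an odd-length list:

```python
# Toy model of hash-based placement -- NOT GlusterFS's real DHT.
# Node names and the crc32 hash are illustrative assumptions.
from zlib import crc32

nodes = ["nodeA", "nodeB", "nodeC", "nodeD"]  # hypothetical hosts

def placement(filename, subvolumes):
    """Map a filename to one subvolume by hashing its name."""
    return subvolumes[crc32(filename.encode()) % len(subvolumes)]

for name in ["report.txt", "data.bin", "notes.md"]:
    primary = placement(name, nodes)            # alphabetical order
    secondary = placement(name, nodes[::-1])    # reversed order
    print(name, primary, secondary, primary != secondary)
```

With an even number of subvolumes, i can never equal n-1-i, so in this toy model the two copies never collide; with an odd number, files hashing to the middle slot would still land on the same host in both aggregates.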
Thank you,
-- Marcio