Hi Stas, please find the answers inlined.

On Mon, Dec 8, 2008 at 1:55 PM, Stas Oskin <stas.oskin at gmail.com> wrote:
> Hi.
>
> Thanks for your answer, it clarifies the matter a bit for me (+ several
> hours I spent on the Gluster wiki :) )
>
>> you can have two unify of 5 bricks each and have these two as children of
>> afr. Something like,
>>
>> volume unify-0
>> type cluster/unify
>> subvolumes n1 n2 n3 n4 n5
>> end-volume
>>
>> volume unify-1
>> type cluster/unify
>> subvolumes n6 n7 n8 n9 n10
>> end-volume
>>
>> volume afr
>> type cluster/afr
>> subvolumes unify-0 unify-1
>> end-volume
>
> Several questions if I may:
>
> 1) In this setup, anything written to the afr volume would actually be
> duplicated to unify-0 and unify-1, correct?

Yes.

> 2) I will not be able to track which server the file copies go to - from my
> point of view it's 2 pools of storage unified by a single space?

2 pools of storage replicating each other. Each pool is a unify of 5 storage
nodes.

> 3) This setup actually means that I will need to add 2 servers every time,
> correct?

You mean, every time you want to add new storage capacity? For the normal
functioning of glusterfs it is not a requirement; glusterfs can continue to
function even if you add a node to only one of the pools. But since the data
is always replicated between the two unify volumes, it is practical to add a
node to each unify pool.

> 4) What if one of the disks in any volume breaks - unify/AFR/client would
> overcome it and supply data from another disk, correct?

afr detects the failure and serves data from the other pool.

> 5) When I bring the disk back, using the "find" approach would re-sync it
> to the current state?

Yes. When you open the file, afr self-heal is triggered and the file is
updated to the latest state.

> 6) What if I run "find" BEFORE bringing the disk back - would it put the
> files on some other disk, or would it still require the disk to come back?

Running find beforehand has no effect in terms of healing the file. The file
is healed only when the other node is up and glusterfs finds its copy to be
older than the other one.

> 7) Would this kind of setup function in a NUFA environment?

I don't understand the question. Unify has a nufa scheduler which prefers the
local node over other nodes during file creation. Also, afr has an option
read-subvolume, where you can specify the preferred node for reads. Using
both options, one can have a nufa kind of environment (see the sketch at the
end of this mail).

> 8) Finally, would it ever be possible to make GlusterFS a completely
> transparent space? Meaning, just have one large space, which accepts new
> volumes automatically, provides this space to clients, and always ensures
> there are at least 2 copies present?

The current approach of having two unifies of storage nodes will prevail.
However, there is a distributed hash translator which does not require a
namespace cache and hence scales better than unify. Also, "hot-add"
functionality is scheduled for future releases, which will enable automatic
addition of nodes without requiring a restart of glusterfs.

>
> Thanks in advance for your time.

regards,
--
Raghavendra G
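
P.S. Regarding (7): a minimal sketch of how the nufa scheduler and afr's
read-subvolume option could be combined is below. The option spellings
(scheduler, nufa.local-volume-name, read-subvolume) are written from memory
and may differ between glusterfs releases, and the unify namespace subvolume
is omitted here as in the example quoted above, so please verify the exact
syntax against the wiki for your version.

volume unify-0
  type cluster/unify
  option scheduler nufa              # prefer the local brick when creating files
  option nufa.local-volume-name n1   # assumed option name; check your release
  subvolumes n1 n2 n3 n4 n5
end-volume

volume unify-1
  type cluster/unify
  option scheduler nufa
  option nufa.local-volume-name n6
  subvolumes n6 n7 n8 n9 n10
end-volume

volume afr
  type cluster/afr
  option read-subvolume unify-0      # serve reads from the preferred pool
  subvolumes unify-0 unify-1
end-volume

With such a layout, new files are scheduled on the local brick within each
pool, and reads are served from the preferred pool while writes still go to
both copies.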