Hi Steffen,

Sorry, there was a mistake in my reply: the unify schedulers cannot schedule based on whether a subvolume is read-only or not. Hence, when a new file creation operation comes in, the scheduler _cannot_ skip the read-only subvolumes. There is a fairly good chance that the create will be scheduled on a read-only subvolume, which of course will fail.
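To make the failure mode concrete, here is a minimal sketch of a unify spec with one read-only and one writable subvolume. The volume names are made up, and the exact option syntax should be checked against your 1.3 build:

# hypothetical client-side fragment, not a tested config
volume unify0
  type cluster/unify
  option scheduler rr        # rr, alu, random, ... - none of them checks
                             # whether a subvolume is writable
  option namespace brick-ns  # unify needs a separate namespace volume
  subvolumes ro1 brick3      # ro1 = filter (read-only), brick3 = writable
end-volume

With the round-robin scheduler above, roughly every second create would land on ro1 and fail, since the scheduler picks subvolumes in turn without knowing that ro1 rejects writes.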
On 6/30/07, Raghavendra G <raghavendra.hg@xxxxxxxxx> wrote:

Hi Steffen,

Please find the answers inlined.

On 6/28/07, Steffen Grunewald <steffen.grunewald@xxxxxxxxxx> wrote:
> With glusterfs 1.3 getting close to a releasable state, I started thinking
> about the structure of a global namespace for our applications.
> There will be "mirrored machines" (AFR) for crucial data, and there should
> be part of the servers offering read-only stuff (making sure nothing gets
> added to the filesystem) perhaps also on top of an AFR layer (for the
> super important, time critical input data repositories).
> This results in the following structure (with individual "units" in varying
> counts):
>
> brick1a
>           > AFR (mirror) > readonly
> brick1b                              \
>                                       |
> brick2a                               |
>           > AFR (mirror) > readonly   |
> brick2b                               > unify
>                                       |
> brick3 (AFR if I'm paranoid)          |
> brick4 (can be AFR)                   |
> brick5 (can be AFR)                  /
> ...
>
> (Apologies for the ASCII graphics.)
>
> Bricks 1 and 2 (which come in two copies) would store the "never lose this"
> stuff while it's still possible to work (store new data) on the other
> bricks.
>
> Of course, two features are essential for such a setup:
> - a "readonly" xlator (does the "filter" xlator yield this? couldn't find
>   anything about "option" syntax)

Yes, the "filter" xlator gives the read-only functionality. It accepts no options.

> - a "unify" xlator which can handle read-only subvolumes (can it already?
>   does this have an influence on the choice for schedulers?)

There should be no problem with unify handling read-only subvolumes.

> Is there someone who already implemented this to some extent and is willing
> to share her/his config? Any suggestions which scheduler to choose?
>
> Cheers,
>  Steffen
>
> --
> Steffen Grunewald * MPI Grav.Phys.(AEI) * Am Mühlenberg 1, D-14476 Potsdam
> Cluster Admin * http://pandora.aei.mpg.de/merlin/ * http://www.aei.mpg.de/
> * e-mail: steffen.grunewald(*)aei.mpg.de * +49-331-567-{fon:7233,fax:7298}
> No Word/PPT mails - http://www.gnu.org/philosophy/no-word-attachments.html

regards,
--
Raghavendra G
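For reference, a client-side spec for the layout sketched above might look roughly like the following. All names are made up, only one mirrored unit and one plain brick are shown, and the option syntax is from 1.3-era vol files, so please verify it against your build:

# two server-side exports, reached over TCP
volume brick1a
  type protocol/client
  option transport-type tcp/client
  option remote-host server1a
  option remote-subvolume brick
end-volume

volume brick1b
  type protocol/client
  option transport-type tcp/client
  option remote-host server1b
  option remote-subvolume brick
end-volume

# mirror the two bricks
volume mirror1
  type cluster/afr
  option replicate *:2       # replicate everything to both subvolumes
                             # (check the exact syntax for your release)
  subvolumes brick1a brick1b
end-volume

# make the mirrored unit read-only; filter takes no options
volume ro1
  type features/filter
  subvolumes mirror1
end-volume

# brick3 (writable) and the namespace volume "ns" would be defined
# like brick1a, pointing at their own servers

volume unify0
  type cluster/unify
  option scheduler rr        # see the correction above: no scheduler
                             # avoids read-only subvolumes on create
  option namespace ns
  subvolumes ro1 brick3
end-volume

As for the scheduler question: given the correction above, the choice of scheduler does not help with the read-only problem, so creates would have to be kept off the read-only subtrees by some other means.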
regards,
--
Raghavendra G