On 3/12/07, Pooya Woodcock <pooya@xxxxxxxxxxxxxxx> wrote:
Hi Krishna,

I'll clarify a bit by saying the main purpose of having cluster/unify plus AFR is to have an expandable grid file system but with redundancy on it too. I don't mind having to restart the daemons or remount the glusterfs volume.

I have the funds to build a 20-node glusterfs cluster, and I was thinking that 20 machines, each with 2x750GB SATA drives, in cluster/unify would yield around 25-30 TB of storage. An EMC SAN of comparable size would cost us way more than this grid. If you add the AFR portion and, say, replicate *:4, that's like saying you can take down 3 servers and still be guaranteed to have a node in the cluster with the file you're looking for.

Please let me know if I'm wrong, and with that in mind, can you provide an example of how to do this on a cluster of 4 servers to start?
Hi Pooya,

Your idea is correct. For an example of how to do it, have you gone through our wiki documentation? http://www.gluster.org/docs/index.php/GlusterFS

Let us know if you need any help.

Krishna
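As a sketch of what Pooya describes, the client-side spec below layers cluster/afr (replicate *:4) under cluster/unify for four servers, in the GlusterFS 1.3-era vol-file syntax. The hostnames, the remote brick name ("brick"), and the round-robin scheduler are placeholders, not something from the thread; with four servers and *:4 there is only one AFR group, so unify has a single subvolume until more AFR groups are appended as the cluster grows.

```
# client.vol -- mounted by glusterfs on each client (sketch; hostnames are placeholders)
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1       # server 1
  option remote-subvolume brick        # name exported by the server spec
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2       # server 2
  option remote-subvolume brick
end-volume

# client3 and client4 are defined the same way, pointing at
# 192.168.0.3 and 192.168.0.4 respectively.

volume afr0
  type cluster/afr
  option replicate *:4                 # every file kept on all 4 subvolumes
  subvolumes client1 client2 client3 client4
end-volume

volume unify0
  type cluster/unify
  option scheduler rr                  # unify needs a scheduler; rr = round-robin
  subvolumes afr0                      # add more AFR groups here as servers are added
end-volume
```

To grow the cluster, you would define four more protocol/client volumes, group them into an afr1, and append afr1 to unify0's subvolumes, then restart the clients.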
Thanks, and awesome project!

Pooya

On Mar 11, 2007, at 1:24 PM, Krishna Srinivas wrote:

> Pooya,
>
> If you change the client.vol you have to restart the glusterfs.
> If you change the server.vol you have to restart the glusterfsd.
>
> It is possible not to restart glusterfs but still grow the glusterfs
> volume, i.e. by using an extra glusterfsd which does unify and which
> communicates with the glusterfs. You can change the configs for this
> glusterfsd and restart it. However, glusterfs will not work properly
> till you start this glusterfsd process, but you don't have to restart
> the glusterfs process.
>
> Let me know if I need to explain by example.
>
> Krishna
>
> On 3/10/07, Pooya Woodcock <pooya@xxxxxxxxxxxxxxx> wrote:
>>
>> I just had a chance to experiment with GlusterFS this evening and one
>> question came up. I am wondering if it is possible to combine types
>> cluster/unify and AFR such that you have a mounted glusterfs volume
>> that grows in size and also has *:2 on its files, for example.
>>
>> Thanks,
>>
>> Pooya
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel@xxxxxxxxxx
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
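One possible reading of Krishna's suggestion of an extra glusterfsd that does the unify: run a dedicated glusterfsd whose spec pulls in the storage servers and re-exports a unified volume, so clients only ever point at this one process. Growing the volume then means editing and restarting just this glusterfsd, not the client mounts. The spec below is a sketch in the same 1.3-era syntax; all hostnames, volume names, and the auth rule are placeholder assumptions.

```
# unify.vol -- run by the dedicated glusterfsd (sketch; names are placeholders)
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1       # storage server 1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2       # storage server 2
  option remote-subvolume brick
end-volume

volume unify0
  type cluster/unify
  option scheduler rr
  subvolumes remote1 remote2           # append remote3, remote4, ... to grow
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes unify0
  option auth.ip.unify0.allow *        # placeholder: restrict this in practice
end-volume
```

The clients' own spec would then contain a single protocol/client volume with remote-subvolume unify0, pointing at the host running this glusterfsd.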