Hi Anthony,

On Wed, Dec 30, 2009 at 6:30 PM, Anthony Goddard <agoddard at mbl.edu> wrote:
> Hi Tejas,
> Thanks for the advice. I think I will be using RAID as well as Gluster
> replication, as we'll only need to sacrifice one drive per RAID set to
> add a bit of extra redundancy.
>
> The rebuild happens at the first access of a file; does this mean that
> the entire brick/node is rebuilt upon an initial file access?

No, only the file which is accessed is rebuilt. That is the reason we
recursively access all the files, using 'ls -laR' on the mount point.

> I think this is what I've seen from using Gluster previously. If this is
> the case, it would rebuild the entire volume, which could span many RAID
> volumes or even machines; is this correct? If so, then the underlying
> disk wouldn't have any effect at all. But if the data is spanned over
> multiple machines (or multiple volumes on one machine) and only one
> machine fails, it only needs to rebuild one volume.
> I don't know if that made any sense.. haha.. but if it did, any insights
> into whether the size of the volumes (aside from RAID rebuilds) will
> have a positive effect on Gluster's rebuild operations?
>
> Cheers,
> Ant.
>
> On Dec 30, 2009, at 2:56 AM, Tejas N. Bhise wrote:
>
> > Anthony,
> >
> > Gluster can take the smaller (6TB) volumes and aggregate them into a
> > large Gluster volume (as seen from the clients), so that takes care of
> > manageability on the client side of things. On the server side, once
> > you make those smaller 6TB volumes, you will depend on RAID to rebuild
> > the disks behind them, so it's good to have a smaller partition. Since
> > you are using RAID and not Gluster replication, it might just make
> > sense to have smaller RAID partitions.
> >
> > If instead you were using Gluster replication and the resulting
> > recovery, it would happen at first access of the file, and the size of
> > the Gluster volume, the backend native FS volume, or the RAID (or raw)
> > partition behind it would not be much of a consideration.
> >
> > Regards,
> > Tejas.
> >
> > ----- Original Message -----
> > From: "Anthony Goddard" <agoddard at mbl.edu>
> > To: gluster-users at gluster.org
> > Sent: Wednesday, December 30, 2009 3:24:35 AM GMT +05:30 Chennai,
> > Kolkata, Mumbai, New Delhi
> > Subject: volume sizes
> >
> > First post!
> > We're looking at setting up six 24-bay storage servers (36TB of JBOD
> > storage per node) and running GlusterFS over this cluster.
> > We have RAID cards on these boxes and are trying to decide what the
> > best size of each volume should be. For example, if we present the
> > OSes (and Gluster) with six 36TB volumes, I imagine rebuilding one
> > node would take a long time, and there may be other performance
> > implications. On the other hand, if we present Gluster / the OSes with
> > six 6TB volumes on each node, we might have more trouble managing a
> > larger number of volumes.
> >
> > My gut tells me a lot of small (if you can call 6TB small) volumes
> > will be lower risk and offer faster rebuilds from a failure, though I
> > don't know what the pros and cons of these two approaches might be.
> >
> > Any advice would be much appreciated!
> >
> > Cheers,
> > Anthony
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

--
Raghavendra G
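The recursive-access trick Raghavendra describes (each file is healed on first access, so walking the whole tree heals everything) can be sketched as a small shell snippet. The directory below is a throwaway stand-in for a Gluster mount point; on a real deployment you would point it at your FUSE mount instead (e.g. something like /mnt/gluster, a hypothetical path):

```shell
# Stand-in for a Gluster mount point (hypothetical; use your real FUSE
# mount on an actual cluster).
MOUNT=$(mktemp -d)
mkdir -p "$MOUNT/dir1/dir2"
touch "$MOUNT/dir1/file-a" "$MOUNT/dir1/dir2/file-b"

# 'ls -laR' recursively lists, and therefore accesses, every file and
# directory under the mount. On a replicated Gluster volume this first
# access is what triggers self-heal for each file.
ls -laR "$MOUNT" > /dev/null && echo "recursive access complete"
```

Discarding stdout with `> /dev/null` keeps the walk quiet; the point is the access itself, not the listing.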