Hello Xavier, it sounds pretty interesting to me. Will you publish any news on this mailing list, or is there another place where I can keep myself updated about your progress?

On Fri, 30 Mar 2012 08:33:37 +0200, Xavier Hernandez <xhernandez@xxxxxxxxxx> wrote:
> Sorry, the previous message was intended for Pascal.
>
> Xavi
>
> On 30.03.2012 08:29, Xavier Hernandez wrote:
>> Hello David,
>>
>> we aren't the core developers of GlusterFS, but we are developing a new
>> translator that will be able to implement something similar to RAID 6.
>> In fact, it will be able to have a configurable level of redundancy. A
>> redundancy of 1 is equivalent to RAID 5; a redundancy of 2 is
>> equivalent to RAID 6; and higher levels of redundancy are supported.
>>
>> We are also trying to improve performance over replicate by using a new
>> contention detection and locking mechanism, but no promises about that
>> yet.
>>
>> We plan to begin internal tests soon. When we consider it stable, we
>> will release a first beta.
>>
>> Xavi
>>
>> On 29.03.2012 17:14, Pascal wrote:
>>> On Thu, 29 Mar 2012 11:02:38 -0400, David Coulson
>>> <david@xxxxxxxxxxxxxxxx> wrote:
>>>
>>> Sorry for the confusion, I misunderstood you at first. Now I think I
>>> know what you mean and I will think about it.
>>>
>>> Are there more suggestions or official plans from the GlusterFS
>>> developers?
>>>
>>>> Not following. If you have a replica count of 3, you can lose two
>>>> boxes in that group and still have access to all your data. It's more
>>>> like a 3-way RAID-1 than anything like RAID-6.
>>>>
>>>> On 3/29/12 11:00 AM, Pascal wrote:
>>>>> On Thu, 29 Mar 2012 10:47:38 -0400, David Coulson
>>>>> <david@xxxxxxxxxxxxxxxx> wrote:
>>>>>
>>>>> Hello David, thanks for your quick reply.
>>>>> I already considered a replica count of 3 (and six servers in total,
>>>>> correct?), but the problem would still be that two hard drives from
>>>>> the same "replica group" would not be allowed to fail at the same
>>>>> time.
>>>>>
>>>>>> Try doing a distributed-replica with a replica count of 3. Not
>>>>>> really 'RAID-6' comparable, but you can have two nodes fail without
>>>>>> an outage.
>>>>>> http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/sect-Administration_Guide--Setting_Volumes-Distributed_Replicated.html
>>>>>>
>>>>>> On 3/29/12 10:39 AM, Pascal wrote:
>>>>>>> Hello everyone, I would like to know if it is possible to set up a
>>>>>>> GlusterFS installation which is comparable to a RAID 6. I did some
>>>>>>> research in the community and several mailing lists, and all I
>>>>>>> could find was a similar request from 2009
>>>>>>> (http://gluster.org/pipermail/gluster-users/2009-May/002208.html,
>>>>>>> http://www.gluster.org/community/documentation/index.ph/Talk:GlusterFS_Roadmap_Suggestions).
>>>>>>> I would just like to have a scenario where two GlusterFS
>>>>>>> nodes/servers, respectively their hard drives, could fail at the
>>>>>>> same time. Thanks in advance!
>>>>>>> Pascal
>>>>>>> _______________________________________________
>>>>>>> Gluster-devel mailing list
>>>>>>> Gluster-devel@xxxxxxxxxx
>>>>>>> https://lists.nongnu.org/mailman/listinfo/gluster-devel
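
Xavi's description above (redundancy r surviving r simultaneous failures: RAID 5 at r=1, RAID 6 at r=2, and higher) can be sketched numerically. This is only an illustration of the capacity/fault-tolerance trade-off, not the translator's actual implementation; the function names and the 6-brick example are assumptions:

```python
# Illustrative sketch only: capacity vs. fault tolerance for an
# erasure-coded ("dispersed") layout with a configurable redundancy,
# compared with plain replication as used by the replicate translator.

def dispersed(n_bricks, redundancy):
    """Any `redundancy` bricks may fail; the rest hold the payload."""
    data_bricks = n_bricks - redundancy
    return {"tolerated_failures": redundancy,
            "usable_fraction": data_bricks / n_bricks}

def replicated(replica_count):
    """A replica set survives replica_count - 1 simultaneous failures."""
    return {"tolerated_failures": replica_count - 1,
            "usable_fraction": 1 / replica_count}

# Redundancy 2 over 6 bricks is RAID-6-like: two failures tolerated,
# but 4/6 of the raw space stays usable, versus 1/3 with replica 3.
print(dispersed(6, 2))
print(replicated(3))
```

The point of the comparison: both layouts survive two simultaneous failures, but the erasure-coded one pays far less in raw capacity.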
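
David's replica-3 point (any two boxes in a group may fail) can also be checked with a small sketch. The layout below is hypothetical: six bricks with assumed names s1..s6 arranged as two replica-3 groups in a distributed-replicated volume; data is lost only if every brick of some group fails at once:

```python
# Hypothetical distributed-replicated layout: two replica-3 groups.
# Brick names s1..s6 are invented for the example.
from itertools import combinations

groups = [{"s1", "s2", "s3"}, {"s4", "s5", "s6"}]
bricks = sorted(set().union(*groups))

def survives(failed):
    """Data survives if every replica group keeps at least one brick."""
    failed = set(failed)
    return all(group - failed for group in groups)

# With replica 3, ANY two simultaneous brick failures are survivable,
# which addresses Pascal's two-drives-in-the-same-group concern.
print(all(survives(pair) for pair in combinations(bricks, 2)))  # True
print(survives({"s1", "s2", "s3"}))  # False: an entire group is gone
```

This also shows the limit of the comparison to RAID 6: three failures are fine as long as they spread across groups, but losing one whole group loses data.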