Xavier, I missed this the first time around - this sounds really awesome.
Please do let us know if you need any help. Thanks!

JM

----- Original Message -----
> Xavier, this sounds cool. Please keep me and John Mark in the loop. We
> will make sure you are getting all the help you need from Red Hat.
> -ab
>
> On Thu, Mar 29, 2012 at 11:29 PM, Xavier Hernandez
> <xhernandez@xxxxxxxxxx> wrote:
> > Hello David,
> >
> > we aren't the core developers of GlusterFS, but we are developing a
> > new translator that will be able to implement something similar to
> > RAID 6. In fact, it will support a configurable level of redundancy:
> > a redundancy of 1 is equivalent to RAID 5, a redundancy of 2 is
> > equivalent to RAID 6, and higher levels of redundancy are also
> > supported.
> >
> > We are also trying to improve performance over replicate by using a
> > new contention detection and locking mechanism, but no promises
> > about that yet.
> >
> > We plan to begin internal tests soon. When we consider it stable, we
> > will release a first beta.
> >
> > Xavi
> >
> > On 29.03.2012 17:14, Pascal wrote:
> >
> > On Thu, 29 Mar 2012 11:02:38 -0400,
> > David Coulson <david@xxxxxxxxxxxxxxxx> wrote:
> >
> > Sorry for the confusion, I understood you wrong in the first place.
> > Now I think I know what you mean, and I will think about it.
> >
> > Are there any more suggestions or official plans from the GlusterFS
> > developers?
> >
> > Not following. If you have a replica count of 3, you can lose two
> > boxes in that group and still have access to all your data. It's
> > more like a 3-way RAID-1 than anything like RAID-6.
> >
> > On 3/29/12 11:00 AM, Pascal wrote:
> >
> > On Thu, 29 Mar 2012 10:47:38 -0400,
> > David Coulson <david@xxxxxxxxxxxxxxxx> wrote:
> >
> > Hello David, thanks for your quick reply.
> > I already considered a replica count of 3 (and six servers in total,
> > correct?), but the problem would still be that two hard drives from
> > the same "replica group" would not be allowed to fail at the same
> > time.
> >
> > Try doing a distributed-replicated volume with a replica count of 3.
> > Not really 'RAID-6' comparable, but you can have two nodes fail
> > without an outage.
> > http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/sect-Administration_Guide--Setting_Volumes-Distributed_Replicated.html
> >
> > On 3/29/12 10:39 AM, Pascal wrote:
> >
> > Hello everyone, I would like to know if it is possible to set up a
> > GlusterFS installation which is comparable to a RAID 6. I did some
> > research in the community and several mailing lists, and all I could
> > find were similar requests from 2009
> > (http://gluster.org/pipermail/gluster-users/2009-May/002208.html,
> > http://www.gluster.org/community/documentation/index.ph/Talk:GlusterFS_Roadmap_Suggestions).
> > I would just like to have a setup where two GlusterFS nodes/servers
> > (or their hard drives) could fail at the same time. Thanks in
> > advance!
> > Pascal
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel@xxxxxxxxxx
> > https://lists.nongnu.org/mailman/listinfo/gluster-devel
>
> --
> Anand Babu Periasamy
> Blog [ http://www.unlocksmith.org ]
> Twitter [ http://twitter.com/abperiasamy ]
>
> Imagination is more important than knowledge --Albert Einstein
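The redundancy levels Xavi describes for the new translator can be sketched with a few lines of arithmetic. This is a hypothetical illustration only, not GlusterFS code (the translator had not been released at the time of this thread): with a given number of bricks and a redundancy level r, usable capacity is bricks minus r, and any r bricks may fail simultaneously without data loss.

```python
# Hypothetical sketch of the redundancy arithmetic described in the
# thread; the function name and return shape are illustrative, not a
# GlusterFS API.

def disperse_layout(bricks, redundancy):
    """With `bricks` total bricks and `redundancy` of them worth of
    parity, usable capacity is bricks - redundancy, and any
    `redundancy` bricks may fail at once without losing data."""
    if redundancy < 1 or redundancy >= bricks:
        raise ValueError("redundancy must be between 1 and bricks - 1")
    return {"usable": bricks - redundancy,
            "tolerated_failures": redundancy}

# Redundancy 1 behaves like RAID 5, redundancy 2 like RAID 6:
print(disperse_layout(6, 1))  # {'usable': 5, 'tolerated_failures': 1}
print(disperse_layout(6, 2))  # {'usable': 4, 'tolerated_failures': 2}
```

This is what distinguishes the proposal from plain replication: capacity overhead grows with r rather than with the replica count.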
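David's point about the distributed-replicated suggestion can likewise be modeled: a replica-3 volume over six servers forms two replica sets of three bricks each, and the volume stays available as long as every set keeps at least one live brick. The brick names below are hypothetical; this is a sketch of the availability rule, not of GlusterFS internals.

```python
# Illustrative availability check for a distributed-replicated volume.
# `replica_sets` is a list of brick-name lists; `failed` is a set of
# failed brick names. All names here are made up for the example.

def replica_volume_available(replica_sets, failed):
    """The volume is fully available iff every replica set retains
    at least one brick that has not failed."""
    return all(any(brick not in failed for brick in bricks)
               for bricks in replica_sets)

# Six servers, replica count 3 -> two replica sets of three bricks.
sets = [["s1", "s2", "s3"], ["s4", "s5", "s6"]]

# Two failures inside one set: still available (3-way RAID-1-like).
print(replica_volume_available(sets, {"s1", "s2"}))        # True
# An entire replica set lost: part of the namespace is gone.
print(replica_volume_available(sets, {"s1", "s2", "s3"}))  # False
```

Note the contrast with RAID 6 that Pascal asked about: the two-failure guarantee only holds per replica set, but within a set even two simultaneous drive failures are survivable.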