Thanks for your reply, Wol.

RAID10/60/61 sound nice for a big disk pool, similar to Ceph's CRUSH and to the permutation method of OpenZFS's dRAID. One thing I am still wondering about is whether there is an efficient method for small storage, say fewer than 10 disks. Building a RAID6 out of the new 10TB disks (for example, 10 x 10TB) can be scary: the rebuild time can be very, very long.

Is there any support in mdadm for RAID6EE (or RAID5EE), which distributes hot-spare blocks along with the data and parity? My understanding is that with distributed spare blocks, the traditional rebuild write bottleneck (writing to only one spare disk) can be improved a lot, since there can now be, for example, 10 - 1 = 9 disks to write to. (To my understanding, RAID5EE/RAID6EE are essentially a special case of the permutation method where there is only one RAID stripe group.) A toy sketch of this idea is appended below the quoted message.

Best,

Feng

On Tue, Oct 2, 2018 at 2:09 PM Wols Lists <antlists@xxxxxxxxxxxxxxx> wrote:
>
> On 02/10/18 14:54, Feng Zhang wrote:
> > Hello all,
> >
> > Any progress on this de-clustered RAID?
> >
> > Maybe you already know that, aside from the de-clustered RAID layout
> > which distributes data and parity chunks, there is also a method that
> > distributes spare chunks along with the data and parity, which is said
> > to bring a faulted array (with one or more failed disks) back to normal
> > production status (rebuilt) very quickly.
> >
> > Like OpenZFS, https://github.com/zfsonlinux/zfs/wiki/dRAID-HOWTO,
> > which uses a "Permutation Development Data Layout" that looks very
> > promising?
>
> It's the usual Open Source answer, it'll get done when someone has the
> time/desire to do it, ie it'll probably be me.
>
> But firstly, Linux is a hobby for me at the moment (even though I ?was?
> a professional programmer), and secondly, I'm a carer, so finding time is
> hard. And I've had a rough few months.
>
> I know what I want to do - I've got the algorithm sorted, I think. I'll
> add it to RAID-10 for testing, and I'll add options for 60 and 61. WHEN
> I get time, sorry :-(
>
> Cheers,
> Wol
>
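
In case it makes the question more concrete, here is a toy Python sketch of the distributed-spare idea. It is purely illustrative: the 6-disk layout, the D/P/S chunk roles and the simple one-slot-per-stripe rotation are my own assumptions, not the actual md or dRAID algorithm.

# Toy sketch, not mdadm/dRAID code: shows how a distributed spare
# spreads rebuild writes over the surviving disks.

NDISKS = 6  # e.g. 4 data + 1 parity + 1 (distributed) spare chunk per stripe

def stripe_roles(stripe):
    """Role of each disk in a given stripe: 'D' data, 'P' parity, 'S' spare.
    Roles rotate by one disk per stripe, so the parity and spare chunks
    land on a different disk in every stripe."""
    base = ['D', 'D', 'D', 'D', 'P', 'S']
    shift = stripe % NDISKS
    return [base[(i + shift) % NDISKS] for i in range(NDISKS)]

def rebuild_writes(failed_disk, nstripes):
    """Count, per surviving disk, how many chunks get written while
    rebuilding the failed disk's chunks into the spare space."""
    writes = {d: 0 for d in range(NDISKS) if d != failed_disk}
    for s in range(nstripes):
        roles = stripe_roles(s)
        if roles[failed_disk] == 'S':
            continue  # the lost chunk was spare space, nothing to rebuild
        # The reconstructed chunk goes into this stripe's spare chunk,
        # which sits on one of the surviving disks.
        writes[roles.index('S')] += 1
    return writes

print(rebuild_writes(failed_disk=0, nstripes=600))
# -> {1: 100, 2: 100, 3: 100, 4: 100, 5: 100}
# With a dedicated hot spare, all 500 rebuild writes would hit one disk.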