Hi DM and Linux-RAID,

There have been multiple proprietary solutions in this space (some nearly 20 years old now) with a number of userspace bugs that are becoming untenable for me as an end user. They work roughly as follows: a closed MD module (typically administered through DM) implements RAID4-style dedicated parity, with one parity disk covering multiple data disks. As there is no striping, the parity disk only has to be as large as the largest data disk (so a set of 4 + 8 + 12 + 16 TB data disks can be protected by a single dedicated 16 TB parity disk).

When a block is written on any data disk, the corresponding parity block is read back from the parity disk and updated bit-by-bit from the old and new data, so only the written disk and the parity disk have to be spun up. Additionally, if enough disks are already spun up, the parity block can instead be recalculated from all of the spinning data disks, resulting in a single write to the parity disk with no parity read, doubling throughput on the parity disk. Finally, any of the data disks can be moved around within the array without invalidating parity, since the on-disk layout has not changed.

I don't necessarily need all of these features. What matters to me is the ability to remove a disk and still access the data that was on it, by spinning up every other disk, until the rebuild is complete. The benefit is that the data disks can all be zoned, and with a fast parity disk the array still performs very well (writes are limited only by the speed of the disk in question plus the parity disk). Additionally, should two disks fail, you have either lost one data disk along with the parity, or two data disks while the parity and the remaining data disks survive; in both cases the surviving disks remain individually readable. (I've appended rough sketches of the XOR arithmetic for these paths at the end of this mail.)

I was reading through the DM and MD code and it looks like everything may already be there to do this; it just needs (significant) stubs added to support this mode, or new code.

SnapRAID is a friendly (and respectable) implementation of this. Unraid and Synology SHR compete in this space, as do other NAS and enterprise SAN providers.

Kyle.
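
To make the read-modify-write path concrete, here is a minimal userspace sketch of the per-block XOR update (my own illustration rather than code from any existing module; the 4 KiB block size and the helper name are assumptions):

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096 /* assumed parity/data block size */

/*
 * Read-modify-write parity update for one block: only the target
 * data disk and the parity disk are involved, so every other disk
 * in the array can stay spun down. The old parity block is read,
 * patched in place, and written back.
 */
static void rmw_parity_update(const uint8_t *old_data,
                              const uint8_t *new_data,
                              uint8_t *parity) /* in: old parity, out: new */
{
        for (size_t i = 0; i < BLOCK_SIZE; i++)
                /* a parity bit flips exactly where the data bit changed */
                parity[i] ^= old_data[i] ^ new_data[i];
}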
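The recalculation path and the degraded rebuild are the same reduction, sketched below under the same assumptions: XOR the corresponding block from every available disk. Because XOR is commutative and associative, the order of the data disks never affects the result, which is why disks can be shuffled within the array freely.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096 /* assumed parity/data block size */

/*
 * XOR-reduce one block position across nblocks source blocks.
 * With all data disks spinning this recomputes the parity block
 * (one write to the parity disk, no parity read); run over the
 * parity block plus the surviving data blocks, it instead
 * reconstructs the block of a removed disk.
 */
static void xor_reduce(uint8_t *out, const uint8_t *const blocks[],
                       int nblocks)
{
        memset(out, 0, BLOCK_SIZE);
        for (int d = 0; d < nblocks; d++)
                for (size_t i = 0; i < BLOCK_SIZE; i++)
                        out[i] ^= blocks[d][i];
}

/*
 * parity rewrite:  xor_reduce(parity, data_blocks, ndata);
 * degraded read:   xor_reduce(missing, parity_and_survivors, ndata);
 */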