>>>>> "Nix" == Nix <nix@xxxxxxxxxxxxx> writes: Nix> On 5 Sep 2020, Brian Allen Vanderburg, II verbalised: >> The idea is actually to be able to use more than two disks, like raid 5 >> or raid 6, except with parity on their own disks instead of distributed >> across disks, and data kept own their own disks as well. I've used >> SnapRaid a bit and was just making some changes to my own setup when I >> got the idea as to why something similar can't be done in block device >> level, but keeping one of the advantages of SnapRaid-like systems which >> is if any data disk is lost beyond recovery, then only the data on that >> data disk is lost due to the fact that the data on the other data disks >> are still their own complete filesystem, and providing real-time updates >> to the parity data. >> >> >> So for instance >> >> /dev/sda - may be data disk 1, say 1TB >> >> /dev/sdb - may be data disk 2, 2TB >> >> /dev/sdc - may be data disk 3, 2TB >> >> /dev/sdd - may be parity disk 1 (maybe a raid-5-like setup), 2TB >> >> /dev/sde - may be parity disk 2 (maybe a raid-6-like setup), 2TB Nix> Why use something as crude as parity? There's *lots* of space Nix> there. You could store full-blown Reed-Solomon stuff in there in Nix> much less space than parity would require with far more Nix> likelihood of repairing even very large errors. A separate Nix> device-mapper target would seem to be perfect for this: like Nix> dm-integrity, only with a separate set of "error-correcting Nix> disks" rather than expanding every sector like dm-integrity does. The problem with parity only disks is that they become hotspots and drag down performance. You need/want to stripe parity/checksums/error correction data across all disks equally so as to get the best performance. There are papers on why no one uses RAID4 because of this. The big trend now seems to be erasure coding, where the parity is striped across the entire cluster, with data stored in varying levels of protection, with some mirrored, some striped, some in varying levels. John