Re: Linux raid-like idea

On 5 Sep 2020, Brian Allen Vanderburg, II verbalised:

> The idea is actually to be able to use more than two disks, like RAID 5
> or RAID 6, except with parity kept on its own disks instead of
> distributed across the data disks, and data kept on its own disks as
> well.  I've used SnapRAID a bit and was just making some changes to my
> own setup when I wondered why something similar couldn't be done at the
> block device level, while keeping one of the advantages of
> SnapRAID-like systems: if any data disk is lost beyond recovery, only
> the data on that disk is lost, because each of the other data disks
> still holds its own complete filesystem.  It would also provide
> real-time updates to the parity data.
>
>
> So for instance:
>
> /dev/sda - may be data disk 1, say 1TB
> /dev/sdb - may be data disk 2, 2TB
> /dev/sdc - may be data disk 3, 2TB
> /dev/sdd - may be parity disk 1 (for a RAID-5-like setup), 2TB
> /dev/sde - may be parity disk 2 (added for a RAID-6-like setup), 2TB
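
For what it's worth, the data/parity side of that scheme reduces to
classic XOR parity, just confined to a dedicated disk. A minimal
userspace sketch of the three operations involved (hypothetical names,
not taken from any existing driver):

/* Sketch only: XOR parity across N data blocks, kept on its own disk. */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

/* Full recompute: parity = D0 ^ D1 ^ ... ^ D(n-1). */
static void parity_compute(uint8_t *parity,
                           const uint8_t *const *data, size_t ndisks)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++) {
        uint8_t p = 0;
        for (size_t d = 0; d < ndisks; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

/* Read-modify-write update when one data block changes:
 * new_parity = old_parity ^ old_data ^ new_data. */
static void parity_update(uint8_t *parity,
                          const uint8_t *old_data, const uint8_t *new_data)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}

/* Rebuild a lost block: XOR the parity with every surviving data block. */
static void parity_recover(uint8_t *lost, const uint8_t *parity,
                           const uint8_t *const *survivors, size_t nsurvivors)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++) {
        uint8_t v = parity[i];
        for (size_t d = 0; d < nsurvivors; d++)
            v ^= survivors[d][i];
        lost[i] = v;
    }
}

The read-modify-write form is what makes your "real-time updates to the
parity data" affordable: a write to one data disk only has to touch that
disk and the parity disk, not the whole stripe.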

Why use something as crude as parity? There's *lots* of space there. You
could store full-blown Reed-Solomon codes in it, in much less space than
parity would require, with a far better chance of repairing even very
large errors. A separate device-mapper target would seem to be perfect
for this: something like dm-integrity, only keeping the error-correcting
data on a separate set of "error-correcting disks" rather than expanding
every sector the way dm-integrity does.
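
To make that concrete, here's a rough userspace sketch (hypothetical,
assuming one syndrome block per "error-correcting disk") of the GF(2^8)
Reed-Solomon syndromes such a target could maintain. This is the
two-syndrome RAID-6 flavour; a full Reed-Solomon construction
generalises it to as many error-correcting disks as you care to attach,
and the kernel's lib/raid6 computes essentially this, only much faster:

/* Sketch only: P/Q syndromes over GF(2^8) using the RAID-6 field
 * polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d). P is plain XOR; Q
 * weights the block from data disk d by g^d, with generator g = 2. */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

/* Multiply by 2 in GF(2^8), reducing by 0x1d when the top bit falls out. */
static uint8_t gf_mul2(uint8_t a)
{
    return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
}

/* Compute P (XOR) and Q (Reed-Solomon) syndrome blocks over ndisks data
 * blocks, Horner-style from the highest-numbered disk down. */
static void raid6_syndromes(uint8_t *p, uint8_t *q,
                            const uint8_t *const *data, size_t ndisks)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++) {
        uint8_t pv = data[ndisks - 1][i];
        uint8_t qv = data[ndisks - 1][i];
        for (size_t d = ndisks - 1; d-- > 0; ) {
            pv ^= data[d][i];
            qv = (uint8_t)(gf_mul2(qv) ^ data[d][i]);
        }
        p[i] = pv;
        q[i] = qv;
    }
}

With P and Q maintained like that, any two lost blocks in a stripe can
be rebuilt, and more syndromes buy correspondingly more recoverable
failures; that's the Reed-Solomon point above.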

-- 
NULL && (void)


