Re: raid10 redundancy

On 19/5/21 09:48, antlists wrote:
On 18/05/2021 17:05, Phillip Susi wrote:

Wols Lists writes:

When rebuilding a mirror (of any sort), one block written requires ONE
block read. When rebuilding a parity array, one block written requires
one STRIPE read.

Again, we're in agreement here.  What you keep ignoring is the fact that
both of these take the same amount of time, provided that you are IO bound.

And if you've got spinning rust, that's unlikely to be true. I can't speak for SATA, but on PATA I've personally experienced the exact opposite. Doubling the load on the interface absolutely DEMOLISHED throughput, turning what should have been a five-minute job into a several-hours job.

And if you've got many drives in your stripe, who's to say that won't overwhelm the I/O bandwidth? Your reads could run at 50% or less of full speed, because there isn't the back-end capacity to pass them on.

Cheers,
Wol

Jumping into this one late, but I thought the main risk was a different one: every read carries some chance that the device fails to return the data successfully, so the more data you have to read in order to restore redundancy, the greater the risk that you never regain it.

So, assuming all drives are of equal capacity, a RAID10 rebuild needs to read far less data (one drive's worth, from the surviving mirror) than a RAID5/6 rebuild (the full contents of every remaining drive), and therefore has a better chance of completing.
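To put rough numbers on that, here is a minimal back-of-envelope sketch. The URE rate, drive size and drive count are my own illustrative assumptions, not figures from this thread, and it treats every bit read as an independent trial at the datasheet error rate, which is a crude model:

# Back-of-envelope: chance of completing a rebuild without hitting an
# unrecoverable read error (URE).  Treats bit errors as independent at the
# drive's datasheet rate -- a crude model, and it ignores that a degraded
# RAID6 can still correct a single URE from its remaining parity.

URE_RATE = 1e-14      # errors per bit read (typical consumer-drive spec)
DRIVE_TB = 8          # per-drive capacity in TB (illustrative assumption)
N_DRIVES = 6          # drives in the array (illustrative assumption)

bits_per_drive = DRIVE_TB * 1e12 * 8

def survives(bits_read):
    """Probability of reading this many bits without a single URE."""
    return (1.0 - URE_RATE) ** bits_read

# RAID10: rebuilding one failed drive reads only its surviving mirror partner.
print("RAID10  rebuild, no URE: %.3f" % survives(bits_per_drive))

# RAID5/6: rebuilding one failed drive reads every remaining drive in full.
print("RAID5/6 rebuild, no URE: %.3f" % survives((N_DRIVES - 1) * bits_per_drive))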

1) speed of recovery: it is quicker to read 1 x drive capacity than (n-1) x drive capacity, even when the reads happen in parallel, unless you can sustain full read speed on every device concurrently (see the rough sketch after this list).

2) load: the rebuild reads fall only on the single surviving "mirror" device in a RAID10, whereas a RAID5/6 rebuild places that load on ALL the remaining devices.

3) lower impact on normal operation: the real workload can carry on unaffected for any reads/writes that don't touch the small part of the array being recovered, and is impacted no worse than RAID5/6 for those that do.
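And a similarly rough sketch of point 1 (again, the capacities and throughput figures are made-up assumptions, not measurements), showing how a shared-bandwidth ceiling, like the PATA case Wol describes above, punishes the parity rebuild much harder than the mirror rebuild:

# Equally rough rebuild-time comparison under a shared bandwidth ceiling.
# Every number here is an illustrative assumption, not a measurement.

DRIVE_TB       = 8      # per-drive capacity in TB
N_DRIVES       = 6      # drives in the array
PER_DISK_MBPS  = 200    # sustained streaming rate of one disk
BUS_LIMIT_MBPS = 600    # shared controller/bus ceiling

drive_mb = DRIVE_TB * 1e6

def rebuild_hours(total_mb_moved, streams):
    # All rebuild streams run in lockstep, so elapsed time is total data
    # moved divided by the aggregate throughput, capped by disks or bus.
    throughput = min(streams * PER_DISK_MBPS, BUS_LIMIT_MBPS)
    return total_mb_moved / throughput / 3600.0

# RAID10: read one surviving mirror + write one replacement = 2 streams.
print("RAID10  rebuild: %5.1f hours" % rebuild_hours(2 * drive_mb, 2))

# RAID5/6: read (n-1) survivors + write one replacement = n streams.
print("RAID5/6 rebuild: %5.1f hours" % rebuild_hours(N_DRIVES * drive_mb, N_DRIVES))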

Right?

Regards,
Adam




