Re: Triple parity and beyond

On 11/21/2013 1:05 AM, John Williams wrote:
> On Wed, Nov 20, 2013 at 10:52 PM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
>> On 11/20/2013 8:46 PM, John Williams wrote:
>>> For myself, or for any machines I manage for work that do not need
>>> high IOPS, I would definitely choose triple- or quad-parity over
>>> RAID 51 or similar schemes with arrays of 16-32 drives.
>>
>> You must see a week-long rebuild as acceptable...
> 
> It would not be a problem if it did take that long, since I would have
> extra parity units as backup in case of a failure during a rebuild.
> 
> But of course it would not take that long. Take, for example, a 24 x
> 3TB triple-parity array (21+3) that has had two drive failures
> (perhaps the rebuild started with one failure, but there was soon
> another failure). I would expect the rebuild to take about a day.

You're looking at today.  We're discussing tomorrow's needs.  Today's
6TB 3.5" drives have a sustained average throughput of ~175MB/s.
Tomorrow's 20TB drives will be lucky to do 300MB/s.  As I said
previously, at that rate a straight disk-to-disk copy of a 20TB drive
takes ~18.6 hours.  That is what you get with RAID 1/10/51.  In the
real world, rebuilding a failed drive in a 3P array of, say, 8 of
these disks will likely take at least 3 times as long: roughly 56
hours (about 2 days 8 hours) minimum, probably more.  That may be
perfectly acceptable to some, but probably not to all.

>>> on a subject Adam Leventhal has already
>>> covered in detail in an article "Triple-Parity RAID and Beyond" which
>>> seems to match the subject of this thread quite nicely:
>>>
>>> http://queue.acm.org/detail.cfm?id=1670144
>>
>> Mr. Leventhal did not address the overwhelming problem we face, which
>> is (multiple) parity array reconstruction time.  He assumes the total
>> reconstruction time for the array is simply the time to 'populate' one
>> drive at its maximum throughput.
> 
> Since Adam wrote the code for RAID-Z3 for ZFS, I'm sure he is aware of
> the time to restore data to failed drives. I do not see any flaw in
> his analysis related to the time needed to restore data to failed
> drives.

He wrote that article in late 2009.  It seems pretty clear he wasn't
looking 10 years ahead to 20TB drives, where the minimum mirror rebuild
time will be ~18.6 hours and a parity rebuild will take much longer.

-- 
Stan