Re: Triple parity and beyond

On Thu, 21 Nov 2013 16:57:48 -0600 Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
wrote:

> On 11/21/2013 1:05 AM, John Williams wrote:
> > On Wed, Nov 20, 2013 at 10:52 PM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> >> On 11/20/2013 8:46 PM, John Williams wrote:
> >>> For myself or any machines I managed for work that do not need high
> >>> IOPS, I would definitely choose triple- or quad-parity over RAID 51 or
> >>> similar schemes with arrays of 16 - 32 drives.
> >>
> >> You must see a week long rebuild as acceptable...
> > 
> > It would not be a problem if it did take that long, since I would have
> > extra parity units as backup in case of a failure during a rebuild.
> > 
> > But of course it would not take that long. Take, for example, a 24 x
> > 3TB triple-parity array (21+3) that has had two drive failures
> > (perhaps the rebuild started with one failure, but there was soon
> > another failure). I would expect the rebuild to take about a day.
> 
> You're looking at today.  We're discussing tomorrow's needs.  Today's
> 6TB 3.5" drives have sustained average throughput of ~175MB/s.
> Tomorrow's 20TB drives will be lucky to do 300MB/s.  As I said
> previously, at that rate a straight disk-disk copy of a 20TB drive takes
> 18.6 hours.  This is what you get with RAID1/10/51.  In the real world,
> rebuilding a failed drive in a 3P array of say 8 of these disks will
> likely take at least 3 times as long, 2 days 6 hours minimum, probably
> more.  This may be perfectly acceptable to some, but probably not to all.

Could you explain your logic here?  Why do you think rebuilding parity
will take 3 times as long as rebuilding a copy?  Can you measure that sort of
difference today?
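One way to measure it today, as a rough sketch: time a vectorized XOR of several data blocks (the core operation of a single-parity rebuild) against a plain copy of one block. NumPy's element-wise XOR stands in here for the kernel's optimized parity routines; the block size and count are arbitrary choices, not figures from this thread.

```python
import time
import numpy as np

N = 1 << 26  # 64 MiB per block, arbitrary
# Four data blocks of random bytes, as if read from surviving drives.
blocks = [np.random.randint(0, 256, N, dtype=np.uint8) for _ in range(4)]

# Parity reconstruction: XOR all surviving blocks together.
t0 = time.perf_counter()
parity = blocks[0] ^ blocks[1] ^ blocks[2] ^ blocks[3]
xor_s = time.perf_counter() - t0

# Straight copy of one block, the RAID1-style baseline.
t0 = time.perf_counter()
copy = blocks[0].copy()
copy_s = time.perf_counter() - t0

print(f"XOR : {N * 4 / xor_s / 1e6:.0f} MB/s of input consumed")
print(f"copy: {N / copy_s / 1e6:.0f} MB/s")
```

On typical current hardware both run at memory-bandwidth speeds, i.e. gigabytes per second, well above any single drive's sustained streaming rate, which is the point at issue: the CPU-side math is unlikely to be the bottleneck.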

Presumably when we have 20TB drives we will also have more cores and quite
possibly dedicated co-processors which will make the CPU load less
significant.
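For reference, the arithmetic behind the figures quoted above is just capacity divided by sustained rate; a few lines make it easy to check (the capacities and rates are the thread's assumptions, not measurements):

```python
TB = 1e12  # drive makers use decimal terabytes
MB = 1e6

def rebuild_hours(capacity_tb, rate_mb_s):
    """Hours to stream one full drive at the given sustained rate."""
    return (capacity_tb * TB) / (rate_mb_s * MB) / 3600

# Straight copy of a 20 TB drive at 300 MB/s (the RAID1/10/51 case):
print(f"{rebuild_hours(20, 300):.1f} h")  # -> ~18.5 h, matching the quoted 18.6

# A 3 TB drive at today's ~175 MB/s, the lower bound for John's
# one-day estimate if the rebuild streams at full speed:
print(f"{rebuild_hours(3, 175):.1f} h")  # -> ~4.8 h
```

The 3x multiplier for a parity rebuild is the claim being questioned here; the streaming-time floor itself is not in dispute.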

NeilBrown


