Re: Is this enough for us to have triple-parity RAID?

On 4/17/2012 1:11 AM, Alex wrote:
> Thanks to Billy Crook who pointed out this is the right place for my post.
> 
> Adam Leventhal integrated triple-parity RAID into ZFS in 2009. The
> necessity of triple-parity RAID is described in detail in Adam
> Leventhal's article(http://cacm.acm.org/magazines/2010/1/55741-triple-parity-raid-and-beyond/fulltext).

No mention of SSD.

> al.(http://www.nature.com/ncomms/journal/v3/n2/full/ncomms1666.html)

Paywalled.  No matter, as I'd already read about this research.

> established a revolutionary way of writing magnetic substrate using a
> heat pulse instead of a traditional magnetic field, which may increase
> data throughput on a hard disk by 1000 times in the future.

Your statement is massively misleading.  The laser heating technology
doesn't independently increase throughput 1000x.  It increases
throughput only by enabling greater areal density, and sequential
throughput rises only with the linear bit density, roughly the square
root of the areal density gain, while capacity rises with the full
gain.  So the ratio of throughput to capacity actually gets worse, and
drive rebuild times will still increase dramatically.
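
To put rough numbers on that (illustrative figures, not measurements):
say today's drive is 1TB at 100MB/s, and a 1000x areal density gain
buys ~32x (sqrt of 1000) sequential throughput.  Quick back-of-envelope
in C:

/* Rebuild-time scaling sketch.  ASSUMPTION: throughput grows with the
 * square root of areal density, capacity grows linearly with it.
 * The drive figures are illustrative, not measurements. */
#include <stdio.h>
#include <math.h>

int main(void)
{
        double cap_tb = 1.0;      /* today's capacity, TB */
        double tput_mbs = 100.0;  /* today's sequential rate, MB/s */
        double gain = 1000.0;     /* hypothetical HAMR density gain */
        double new_cap = cap_tb * gain;
        double new_tput = tput_mbs * sqrt(gain);

        printf("today:  %6.0f TB @ %5.0f MB/s -> %5.1f h full read\n",
               cap_tb, tput_mbs, cap_tb * 1e6 / tput_mbs / 3600);
        printf("future: %6.0f TB @ %5.0f MB/s -> %5.1f h full read\n",
               new_cap, new_tput, new_cap * 1e6 / new_tput / 3600);
        return 0;
}

That's ~2.8 hours for a bare full-disk read today versus ~88 hours on
the hypothetical HAMR drive, before any parity math or competing I/O.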

> facilitate another triple-parity RAID algorithm

CPU performance is increasing faster than that of any other computer
technology.  Thus, if you're going to bother introducing another parity
RAID level, and the binary will run on host CPU cores, skip triple
parity and go straight to quad parity, RAID-P4™.  Most savvy folks doing
RAID6 use a 6+2 or 8+2 configuration, as wide-stripe parity arrays tend
to be problematic.  They then stripe those arrays to create a RAID60, or
concatenate them if they're even more savvy and use XFS.
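
For the curious, here's a minimal sketch of what the per-stripe
syndrome math could look like, extending the P/Q construction Linux
md's raid6 already uses (syndromes over GF(2^8), polynomial 0x11d,
generator {02}) with two more syndromes.  Fair warning: the naive
powers-of-g coefficients below are not guaranteed MDS at all disk
counts (that's precisely the coefficient-selection problem Adam's
article covers), so treat this as an illustration, not a working
RAID-P4™:

/* Quad-parity syndrome sketch over GF(2^8), via Horner's rule as in
 * md's raid6 Q computation.  NOT md code, and the g^2/g^3 coefficient
 * choice is the naive one; real coefficients must keep the code MDS. */
#include <stdint.h>
#include <stddef.h>

static uint8_t gf_mul2(uint8_t a)      /* multiply by {02} in GF(2^8) */
{
        return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
}

static void quad_syndromes(uint8_t **data, int ndisks, size_t len,
                           uint8_t *p, uint8_t *q,
                           uint8_t *r, uint8_t *s)
{
        for (size_t off = 0; off < len; off++) {
                uint8_t wp = 0, wq = 0, wr = 0, ws = 0;

                /* Walk disks high to low; multiplying the running
                 * syndrome by g, g^2, g^3 each step yields
                 * P = sum d_i, Q = sum g^i*d_i, R = sum g^2i*d_i,
                 * S = sum g^3i*d_i. */
                for (int d = ndisks - 1; d >= 0; d--) {
                        uint8_t b = data[d][off];

                        wp ^= b;
                        wq = gf_mul2(wq) ^ b;
                        wr = gf_mul2(gf_mul2(wr)) ^ b;
                        ws = gf_mul2(gf_mul2(gf_mul2(ws))) ^ b;
                }
                p[off] = wp; q[off] = wq; r[off] = wr; s[off] = ws;
        }
}

A real implementation would of course use lookup tables or SIMD rather
than bit-twiddled multiplies, as the md raid6 code does.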

The most common JBOD chassis on the market today seems to be the 24x
2.5" drive layout.  That allows three 6+2 RAID6 arrays, losing 6 drives
to parity and leaving 18 drives of capacity.  With RAID-P4™ a
wider-stripe array becomes more attractive for some applications.  Thus
our 24-drive JBOD could yield a single 20+4 RAID-P4™ array with two
drives more capacity than the three 6+2 RAID6 arrays.  If one wished to
stick with narrower stripes, we'd get two 8+4 RAID-P4™ arrays and 16
drives of capacity, 2 fewer than the triple RAID6 setup but still 4 more
than RAID10.
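
Side by side for the same 24-bay JBOD:

        layout                parity drives   data drives
        3 x (6+2) RAID6             6             18
        1 x (20+4) RAID-P4™         4             20
        2 x (8+4) RAID-P4™          8             16
        12 x (1+1) RAID10          12             12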

The really attractive option here for people who like parity RAID is
the 20+4 possibility.  With a RAID-P4™ array that can withstand up to 4
drive failures, people will no longer be afraid of using wide stripes
for applications that typically benefit from them, where RAID50/60
would previously have been employed.  They also no longer have to worry
about secondary and/or tertiary drive failures during a rebuild.
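
To make "no longer have to worry" concrete: losing a 20+4 array
mid-rebuild means more than 3 of the 23 surviving drives dying before
the rebuild completes.  A quick sketch, assuming independent failures
and a made-up 1% per-drive failure probability over the rebuild window:

/* Probability of array loss during rebuild for an n-drive array that
 * can absorb 'spare' further failures.  ASSUMPTIONS: independent
 * failures and an invented per-drive probability p; real drives fail
 * in correlated batches, so treat the output as a lower bound. */
#include <stdio.h>
#include <math.h>

static double choose(int n, int r)    /* binomial coefficient C(n,r) */
{
        double c = 1.0;
        for (int i = 1; i <= r; i++)
                c = c * (n - r + i) / i;
        return c;
}

int main(void)
{
        int n = 24;       /* drives in the 20+4 RAID-P4 array */
        int spare = 3;    /* further failures survivable mid-rebuild */
        double p = 0.01;  /* assumed failure prob. per drive, per rebuild */
        double loss = 0.0;

        /* P(more than 'spare' of the n-1 remaining drives fail) */
        for (int j = spare + 1; j <= n - 1; j++)
                loss += choose(n - 1, j) * pow(p, j)
                        * pow(1.0 - p, n - 1 - j);

        printf("P(data loss during rebuild) ~= %.2e\n", loss);
        return 0;
}

With those invented inputs the 20+4 loss probability lands around
7e-5; drop 'spare' to 1 (RAID6 mid-rebuild) or 0 (RAID5) and it climbs
by orders of magnitude.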

Yeah, definitely go straight to RAID-P4™ and skip triple-parity RAID
altogether.  You'll have to do it in 6-10 years anyway, so you may as
well save yourself the extra work.  And people could definitely benefit
from RAID-P4™ today.

-- 
Stan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

