Re: Optimizing small IO with md RAID

On 30/05/2011 13:57, John Robinson wrote:
> On 30/05/2011 12:20, David Brown wrote:
>> (This is in addition to what Stan said about filesystems, etc.)
>> [...]
>> Try your measurements with a raid10,far setup. It costs more on data
>> space, but should, I think, be quite a bit faster.
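
For concreteness, the sort of thing I mean - the device names, the
24-drive count and the chunk size here are assumptions on my part:

  # Hypothetical: 24 drives sda..sdx, 512K chunk, "far 2" layout.
  # f2 keeps the second copy of each chunk in the far half of every
  # drive, so sequential reads run at close to raid0 speed.
  mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 \
        --raid-devices=24 /dev/sd[a-x]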

> I'd also be interested in what performance is like with RAID60, e.g. 4
> 6-drive RAID6 sets, combined into one RAID0. I suggest this arrangement
> because it gives slightly better data space (33% better than the RAID10
> arrangement), better redundancy (if that's a consideration[1]), and
> would keep all your stripe widths in powers of two, e.g. 64K chunk on
> the RAID6s would give a 256K stripe width and end up with an overall
> stripe width of 1M at the RAID0.
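
As a sketch of that arrangement (device names assumed, chunk sizes
following your numbers):

  # Four 6-drive raid6 sets with 64K chunks.
  mdadm --create /dev/md1 --level=6 --chunk=64 --raid-devices=6 /dev/sd[a-f]
  mdadm --create /dev/md2 --level=6 --chunk=64 --raid-devices=6 /dev/sd[g-l]
  mdadm --create /dev/md3 --level=6 --chunk=64 --raid-devices=6 /dev/sd[m-r]
  mdadm --create /dev/md4 --level=6 --chunk=64 --raid-devices=6 /dev/sd[s-x]
  # Each raid6 stripe is 256K (4 data disks x 64K); using that as the
  # raid0 chunk gives the 1M overall stripe.
  mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/md[1-4]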


Power-of-two stripe widths may be better for xfs than non-power-of-two widths - perhaps Stan can answer that (he seems to know lots about xfs on raid). But you have to be careful when testing and benchmarking: with power-of-two stripe widths, results depend heavily on how the transfer size lines up with full stripes, so it's easy to get great 4 MB performance but terrible 5 MB performance. Benchmark with sizes that match your real workload.
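
If you do go that way, the filesystem should be told the geometry
explicitly - a sketch for xfs on the 4x6 raid60 above, with the su/sw
values following from the assumed 64K chunk (recent mkfs.xfs will often
pick these up from md itself, but it's worth checking what it chooses):

  # 64K stripe unit, 16 data spindles (4 sets x 4 data disks) = 1M stripe.
  mkfs.xfs -d su=64k,sw=16 /dev/md0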


As for the redundancy of raid6 (or 60) vs. raid10, the redundancy is different but not necessarily better - it depends on your failure types and requirements. raid6 will tolerate any two drives failing, while raid10 will tolerate up to half the drives failing, as long as you don't lose both halves of any one pair. With enough disks, the chance of losing both disks of a single pair can be lower than the chance of losing any three disks of a raid6 set: once raid6 has lost two drives, a third failure anywhere kills it, whereas raid10 with one failed drive only dies if the next failure happens to hit that drive's specific partner. And raid10 suffers much less from running in degraded mode than raid6, and recovery is faster and less stressful - rebuilding a mirror reads one disk rather than the whole array. So which is "better" depends on the user.

Of course, there is no question about the differences in space efficiency - that's easy to calculate (on 24 drives, raid10 gives 12 drives' worth of usable space against 16 for the 4x6-drive raid60).

For greater paranoia, you can always go for raid15 or even raid16...

> You will probably always have relatively poor small write performance
> with any parity RAID for reasons both David and Stan already pointed
> out, though the above might be the least worst, if you see what I mean.

> You could also try 3 8-drive RAID6s or 2 12-drive RAID6s, but you'd
> definitely have to be careful - as Stan says - with your filesystem
> configuration because of the stripe widths, and the bigger your parity
> RAIDs the worse your small write and degraded performance becomes.
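
To illustrate why those stripe widths get awkward: with 64K chunks
(again an assumption), an 8-drive raid6 has a 384K stripe (6 data
disks) and a 12-drive raid6 a 640K stripe (10 data disks), neither a
power of two, so xfs on, say, the 2x12-drive raid60 needs something
like:

  # 64K stripe unit, 20 data spindles (2 sets x 10 data disks).
  mkfs.xfs -d su=64k,sw=20 /dev/md0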

> Cheers,
>
> John.

> [1] RAID6 lets you get away with sector errors while rebuilding after a
> disc failure. In addition, as it happens, setting up this arrangement
> with two drives on each controller for each of the RAID6s would mean you
> could tolerate a controller failure, albeit with horrible performance
> and you would have no redundancy left. You could configure smaller
> RAID6s or RAID10 to tolerate a controller failure too.
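
For what it's worth, a sketch of that controller-spanning layout - the
three-controller, eight-drives-each mapping is purely my assumption:

  # Hypothetical: controller 1 = sd[a-h], 2 = sd[i-p], 3 = sd[q-x].
  # Each raid6 takes two drives from each controller, so losing a
  # controller costs each set exactly the two drives raid6 can absorb.
  mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd{a,b,i,j,q,r}
  mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sd{c,d,k,l,s,t}
  mdadm --create /dev/md3 --level=6 --raid-devices=6 /dev/sd{e,f,m,n,u,v}
  mdadm --create /dev/md4 --level=6 --raid-devices=6 /dev/sd{g,h,o,p,w,x}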



