Re: MD performance options: More CPU’s or more Hz’s?

mark delfman wrote:
Hi... I am wondering if anyone can offer some advice on MD performance
related to CPU (speed and/or cores).  The basic question (probably too
basic) is “for more MD performance, are you better off with more CPUs or a
faster single CPU?”  (In an ideal world we would have lots of very
fast CPUs, but we never have enough money....)

Is there any grounding to the following logic:

Presuming that a RAID0 will deliver 1.5GB/sec and a RAID6 circa
700MB/sec, I am guessing there are many complex reasons for the
difference, one of the more obvious being the need for the CPU to
perform all the necessary RAID6 overhead (the parity calculations).
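
As I understand it, that overhead is mostly computing two parity
syndromes per stripe: P is a plain XOR across the data chunks and Q is a
Reed-Solomon syndrome computed in GF(2^8).  A rough Python sketch of the
arithmetic, just to show what the CPU is being asked to do; the kernel of
course does this with hand-optimised assembly routines rather than
anything like this:

  # Sketch of the per-stripe arithmetic behind RAID6's P and Q parity.
  # P is a plain XOR; Q is computed in GF(2^8) (generator 2, poly 0x11d).

  def gf_mul2(x):
      """Multiply a byte by 2 in GF(2^8) using the 0x11d polynomial."""
      x <<= 1
      if x & 0x100:
          x ^= 0x11d
      return x & 0xff

  def raid6_syndromes(data_chunks):
      """data_chunks: equal-length byte strings, one per data disk."""
      size = len(data_chunks[0])
      p = bytearray(size)
      q = bytearray(size)
      # Horner's scheme: fold the disks in from last to first.
      for chunk in reversed(data_chunks):
          for i in range(size):
              p[i] ^= chunk[i]
              q[i] = gf_mul2(q[i]) ^ chunk[i]
      return bytes(p), bytes(q)

  # e.g. for an 8-data-disk array with a 64 KiB chunk:
  #   p, q = raid6_syndromes([b'\x5a' * 65536] * 8)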

If we look at a single RAID6 configuration, then I am guessing that if we
increase the speed of the CPU from, e.g., 2.0GHz to 2.8GHz (quad-core
Xeon) then the RAID6 calculations would be faster?  Would other
overheads also be faster, and if so is there any known relationship
between CPU Hz and MD performance (maybe even a rough rule of thumb,
e.g. double the CPU Hz and RAID6 performance increases by 20%, etc.)?

If however I start to think of multiple RAID6 configurations, maybe via
iSCSI etc., then I wonder whether MD would be better served with more CPUs
instead... for example, 2 x quad-core 2.0GHz Xeons instead of 1 x 2.8GHz.
This theory depends on Linux/MD effectively processing the overhead in
parallel, and I have no knowledge in this area... hence the question.

Any thoughts anyone?

Your logic is correct, but it implies that you expect "faster calculation" to mean "faster write performance," and that is usually true only at very low or very high write loads.

Very low, because you get the I/O queued a few ns faster. Since the disk still has to do the write, this is essentially meaningless.

Very high, because with many drives and a huge write volume you could, in theory, start having CPU issues.

I suggest that before you worry too much about that, you look at CPU usage at idle and then at gradually increasing write load, and compare system time vs. GB/sec to see if you are actually getting anywhere near the limit, or even close enough to notice. When I looked at this a few years ago I didn't see any issues, but that was with only eight drives in the array. Measurement is always good, but in general drive performance is the limiting factor rather than CPU.
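Something along these lines is what I have in mind; a rough Python sketch
(it assumes the aggregate "cpu" line in /proc/stat and writes a plain file,
so point it at a file on the array and take the numbers as ballpark only):

  import os, sys, time

  def cpu_times():
      """Return (user, nice, system, idle, iowait) jiffies from /proc/stat."""
      with open('/proc/stat') as f:
          fields = f.readline().split()
      return [int(v) for v in fields[1:6]]

  def measure_write(path, gb=4, chunk_mb=8):
      """Write `gb` GB to `path` and report system CPU time per GB."""
      hz = os.sysconf('SC_CLK_TCK')             # jiffies per second
      buf = b'\0' * (chunk_mb * 1024 * 1024)
      before = cpu_times()
      t0 = time.time()
      with open(path, 'wb') as f:
          for _ in range(gb * 1024 // chunk_mb):
              f.write(buf)
          f.flush()
          os.fsync(f.fileno())
      elapsed = time.time() - t0
      after = cpu_times()
      sys_sec = (after[2] - before[2]) / hz     # system time, all CPUs combined
      print("%.1f GB in %.1f s (%.0f MB/s), %.2f s system CPU (%.2f s/GB)"
            % (gb, elapsed, gb * 1024 / elapsed, sys_sec, sys_sec / gb))

  if __name__ == '__main__':
      measure_write(sys.argv[1] if len(sys.argv) > 1 else '/tmp/mdtest.bin')

Run it against files on the array at a few different sizes and watch how
the system-CPU-per-GB figure grows (or doesn't) with increasing load.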

You didn't ask: if you use ext[34] filesystems, there is a gain to be had from tuning the stripe and stride parameters, at least for large sequential I/O. My measurements were on 2.6.26, so they are out of date, but less head motion is always better.
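The usual back-of-the-envelope for those values, as I understand it (the
exact mke2fs option spelling varies between versions, so check the man
page before using it):

  # stride       = chunk size / filesystem block size
  # stripe-width = stride * number of data disks (RAID6 loses two to parity)

  def ext_stripe_params(chunk_kib, total_disks, parity_disks=2, block_kib=4):
      stride = chunk_kib // block_kib                 # fs blocks per chunk
      stripe_width = stride * (total_disks - parity_disks)
      return stride, stripe_width

  # e.g. a 10-disk RAID6 with a 64 KiB chunk:
  #   ext_stripe_params(64, 10)  ->  (16, 128)
  # which would feed into something like:
  #   mke2fs -E stride=16,stripe-width=128 /dev/md0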

Others may have more experience; other than load testing, the array has never been stressed, and the performance of backup servers is less important than reliability.

--
Bill Davidsen <davidsen@xxxxxxx>
 Unintended results are the well-earned reward for incompetence.


