Re: Is this expected RAID10 performance?

On 6/7/2013 6:25 AM, Steve Bergman wrote:
> I don't have the source link handy, but it was an industry white
> paper. (I doubt you'll get it changed.)
> 
> There does seem to be some interface limitation here. Running "dd
> if=/dev/sdX of=/dev/null bs=512k" simultaneously for various
> combinations of drives gives me:

It's not a bus interface limitation.  The DMI link speed on the 5 Series
PCH is 1.25GB/s each way.
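
For reference, here's a minimal sketch of the simultaneous-read test
you describe (sda/sdb are placeholders; substitute whichever ports you
want to exercise):

  # Read two whole drives in parallel.  Each dd reports its own
  # throughput to stderr when it finishes (GNU dd).
  for d in sda sdb; do
      dd if=/dev/$d of=/dev/null bs=512k count=4096 &
  done
  wait   # 4096 x 512KiB = 2GiB read per drive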

> Port  A alone      : 155MByte/s
> Ports A & B        : 105MByte/s per drive
> Ports A, B & C     : 105MByte/s per drive for A & B; 155MByte/s for C
> Ports A, B, C & D  : 105MByte/s per drive
> 
> So there's an aggregate limitation of ~1.7Gbit/s per port pair, with
> A&B and C&D making up the pairs.
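
Sanity check on the arithmetic: 2 x 105MByte/s = 210MByte/s, which is
about 1.7Gbit/s per pair, so that figure is internally consistent.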

This may be a quirk of the 5 Series PCH.  But note you're using the
standard ICH (ata_piix) driver.  Switch to the AHCI driver and you may
see some gains here.  Also try the deadline elevator.  When I mentioned
it earlier I intended for you to use it; it wasn't optional.  It will
improve performance over CFQ.  That isn't guesswork: everyone in Linux
storage knows it to be true, just as they know to use noop with SSDs
and hardware RAID w/[F|B]BWC.
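
To see which driver is currently bound, and to flip the elevator at
runtime, something like the following should do (sda is a placeholder;
repeat for each member drive, and note the elevator setting won't
persist across reboots):

  # Which driver owns the SATA controller?  ata_piix = legacy IDE/ICH
  # mode, ahci = AHCI mode (controller mode is selected in the BIOS).
  lspci -nnk | grep -iA3 sata

  # Show available elevators ([current] is bracketed), then switch.
  cat /sys/block/sda/queue/scheduler
  echo deadline > /sys/block/sda/queue/scheduler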

Which kernel version and OS is this again?

-- 
Stan
