Re: Software RAID checksum performance on 24 disks not even close to kernel reported

On 6/6/2012 11:09 AM, Dan Williams wrote:

> Hardware raid ultimately does the same shuffling, outside of nvram an
> advantage it has is that parity data does not traverse the bus...

Are you referring to the host data buses?  I.e. HT/QPI and PCIe?

With a 24-disk array, a full-stripe write is only 1/12th parity data,
less than 10%.  And the buses (point-to-point links, actually) of
systems in the 24-drive class will usually start at a one-way bandwidth
of 4 GB/s for PCIe 2.0 x8, with one-way bandwidth from the PCIe
controller to the CPU starting at 10.4 GB/s on AMD HT 3.0 systems.
PCIe x8 is plenty to handle a 24-drive md RAID 6, at least with 7.2K
SATA drives.
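Back-of-the-envelope check of those numbers.  The per-drive sequential
throughput (~130 MB/s for a 7.2K SATA disk) is my assumption, not a
figure from the post:

```python
# Parity overhead and bus headroom for a 24-disk md RAID 6.
N_DISKS = 24
PARITY_DISKS = 2            # RAID 6 has two parity chunks per stripe

parity_fraction = PARITY_DISKS / N_DISKS
print(f"parity share of a full-stripe write: {parity_fraction:.1%}")

DRIVE_MBPS = 130            # assumed 7.2K SATA sequential throughput
PCIE2_X8_GBPS = 4.0         # one-way PCIe 2.0 x8
HT3_GBPS = 10.4             # one-way AMD HT 3.0, controller to CPU

array_gbps = N_DISKS * DRIVE_MBPS / 1000
print(f"aggregate drive throughput: {array_gbps:.1f} GB/s")
print(f"fits in PCIe 2.0 x8: {array_gbps < PCIE2_X8_GBPS}")
```

With that assumed drive speed, the whole array streams ~3.1 GB/s,
comfortably under the 4 GB/s link.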

A bigger issue, and perhaps what you were actually referring to, is
read-modify-write bandwidth: a sub-stripe update can incur a full
stripe read plus a full stripe write.  For RMW-heavy workloads this is
significant, and it is where HBA RAID has a real advantage, given that
an md array of this size has the aggregate performance to saturate the
PCIe bus.
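A rough sketch of that RMW amplification on the bus.  The chunk size
and the 64 KB application write are assumed values, chosen only to
illustrate the worst case where md reads and rewrites the whole stripe:

```python
# Worst-case bus traffic for a small write into a 24-disk RAID 6 stripe.
N_DISKS = 24
PARITY_DISKS = 2
CHUNK_KB = 512              # assumed md chunk size

data_disks = N_DISKS - PARITY_DISKS
stripe_data_kb = data_disks * CHUNK_KB      # user data per full stripe

write_kb = 64                               # a small application write
# Full stripe read + full stripe write both cross the host bus with md;
# on HBA RAID the parity traffic stays on the card.
bus_kb = 2 * N_DISKS * CHUNK_KB
print(f"bus traffic for a {write_kb} KB write: {bus_kb} KB "
      f"({bus_kb / write_kb:.0f}x amplification)")
```

The same update on a caching HBA moves only the 64 KB over the bus,
which is the advantage referred to above.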

-- 
Stan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

