Odd performance observation for RAID0

Hi folks:

  I built a RAID0 stripe across two large hardware RAID cards
(identical cards and driver).  What I am finding is that direct IO
gives about the performance I expect (2x a single RAID card), while
buffered IO is only about as fast as a single RAID card.  This holds
across chunk sizes from 64k through 4096k, with an xfs file system.
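
  For reference, the array and file system were set up along these
lines (the device names, chunk size, and mount point below are
illustrative, not necessarily my exact values):

    # create the RAID0 stripe across the two hardware RAID card LUNs
    # (device names and the 256k chunk are placeholders)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 \
        /dev/sdb /dev/sdc

    # put xfs on the stripe and mount it
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/raid0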

  This is a 2.6.23.14 kernel.  I also tried 2.6.24.2; there, direct
IO was only as fast as buffered IO on a single RAID card, even though
the RAID0 still striped across both RAID cards.

  There appears to be some issue with this combination of RAID0, the
buffer cache, and the driver.  Breaking the RAID0 apart and running
the same tests against each RAID card on its own gives the expected
performance for both buffered and direct IO.
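
  To be concrete, the comparison looks roughly like this (file and
block sizes are illustrative, and dd here just stands in for the
actual benchmark tool):

    # buffered write through the page cache (sizes are placeholders)
    dd if=/dev/zero of=/mnt/raid0/test bs=1M count=32768

    # same write with direct IO, bypassing the page cache
    dd if=/dev/zero of=/mnt/raid0/test bs=1M count=32768 oflag=direct

Against the md device, only the direct IO case scales to roughly 2x
a single card.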

  Is there something I can do to tune how the raid0 driver interacts
with the buffer cache?  Standard vm tweaking yields only modest
changes, and I am not convinced the issue is there.  It looks as if
the raid0 driver is serializing something in the buffer cache path.
Does this make sense?  For small drives on plain SATA controllers I
see the "expected" RAID0 behavior.  Is it possible that with very
large or very fast devices there is some sort of buffer
serialization?
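
  The sort of vm tweaking I mean is along these lines (the values
here are only examples):

    # readahead on the md device (value is a placeholder)
    blockdev --setra 8192 /dev/md0

    # dirty page writeback thresholds
    sysctl -w vm.dirty_ratio=40
    sysctl -w vm.dirty_background_ratio=5

As noted, none of that changes the buffered IO numbers on the stripe
by much.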

  Clues and guidance are requested.  Thanks!

Joe
