Re: Performance Characteristics of All Linux RAIDs (mdadm/bonnie++)

Alan Cox wrote:
>> I really don't think that's any part of the issue; the same memory and
>> bridge went 4-5x faster in other read cases. The truth is that the RAID-1
>> performance is really bad, and it's the code causing it AFAIK. If you
>> track the actual IO, it seems to read one drive at a time, in order,
>> without overlap.
>
> Make sure the readahead is set to be a fair bit over the stripe size if
> you are doing bulk data tests for a single file. (Or indeed in the real
> world for that specific case ;))
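
For anyone following along, readahead on the md device is cheap to check and
change at run time; blockdev works in 512-byte sectors, so 16MB is 32768. A
rough sketch, assuming the array is /dev/md0 (adjust to taste):

    # show current readahead, in 512-byte sectors
    blockdev --getra /dev/md0
    # raise it well above the stripe (chunk size x number of data disks),
    # e.g. to 16MB:
    blockdev --setra 32768 /dev/md0

The numbers are purely illustrative; pick whatever comfortably clears your
own stripe size.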

IIRC Justin has readahead at 16MB and chunk at 256k. I would think that if multiple devices were used at all by the md code, the chunk size rather than the stripe size would be the issue. In this case the readahead seems large enough to trigger good behavior where it is available.
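
If anyone wants to double-check on their own box, something along these
lines -- assuming /dev/md0 built from sda and sdb -- shows the chunk size as
the array reports it and whether md actually touches more than one member
during a streaming read:

    # chunk size (for the striped levels)
    mdadm --detail /dev/md0 | grep -i chunk
    # in another terminal, watch per-member throughput while reading a big file
    iostat -x 1 sda sdb

If only one member shows traffic at any given moment, that matches the
one-drive-at-a-time behavior described above.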

Note: this testing was done with an old(er) kernel, as was all of mine. Since my one large RAID array has become more mission critical, I'm not comfortable playing with new kernels. The fate of big, fast, and stable machines is to slide into production use. :-( I suppose that's not a bad way to do it; I now have faith in what I'm running.

--
Bill Davidsen <davidsen@xxxxxxx>
 "Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismark

