On 1/15/2013 6:55 AM, Peter Rabbitson wrote:
> On Tue, Jan 15, 2013 at 07:49:10AM -0500, Phil Turmel wrote:
>> You are neglecting each drive's need to skip over parity blocks. If the
>> array's chunk size is small, the drives won't have to seek, just wait
>> for the platter to spin. Larger chunks might need a seek.
>>
>> Either way, you won't get better than (single drive rate) * (n-2),
>> where "n" is the number of drives in your array. (Large sequential
>> reads.)
>
> This can't be right. As far as I know the md layer is smarter than that,
> and includes various anticipatory codepaths specifically to leverage
> multiple drives in this fashion. FWIW, RAID5 does give me the
> near-expected speed (n * single drive).

It is right. You're likely confusing this with the "smarts" of the
RAID1/10 optimizations. In that case there is more than one copy of each
block, on more than one drive, allowing for additional parallelism. With
a 4-drive RAID6 there is only one copy of each block, on one drive. Thus,
as Phil states, the best performance you can get here is two spindles'
worth of throughput, which is why you're seeing a maximum of ~250 MB/s
for the array.

Unless you plan to expand this array in the future by adding more drives
and doing a reshape, I'd suggest you switch to RAID10. It will give you
3x or more write throughput with greatly reduced latency, substantially
faster rebuild times, and possibly a little extra read throughput. With
only 4 drives, RAID6 doesn't make sense, as RAID10 is superior in every
way.

-- 
Stan
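The arithmetic above can be sketched quickly. This is a back-of-envelope
estimate, not a model of md internals; the ~125 MB/s per-spindle figure is
an assumed round number chosen to match the ~250 MB/s observed, and the
RAID10 ceiling is an idealized best case (real layouts land between n/2
and n spindles of read throughput):

```python
def raid6_seq_read_ceiling(n_drives: int, drive_mbs: float) -> float:
    # Each RAID6 stripe holds n-2 data chunks and 2 parity chunks; every
    # drive must skip its parity chunks, so only n-2 spindles' worth of
    # data streams out on a large sequential read.
    return (n_drives - 2) * drive_mbs

def raid10_seq_read_ceiling(n_drives: int, drive_mbs: float) -> float:
    # Two copies of every block exist, so with ideal scheduling all n
    # spindles can serve read requests in parallel (upper bound).
    return n_drives * drive_mbs

# Assumed ~125 MB/s per spindle for a 4-drive array:
print(raid6_seq_read_ceiling(4, 125))   # 250.0 MB/s, matching the observation
print(raid10_seq_read_ceiling(4, 125))  # 500.0 MB/s upper bound
```

The same reasoning explains Peter's RAID5 numbers: with one parity chunk
per stripe the ceiling is (n-1) spindles, which is close enough to n on a
4-drive array to look like "n * single drive".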