On Thursday March 23, aizvorski@xxxxxxxxx wrote:
> Neil - Thank you very much for the response.
>
> In my tests with identically configured raid0 and raid5 arrays, raid5
> initially had much lower throughput during reads. I had assumed that
> was because raid5 did parity-checking all the time. It turns out that
> raid5 throughput can get fairly close to raid0 throughput
> if /sys/block/md0/md/stripe_cache_size is set to a very high value,
> 8192-16384. However the cpu load is still very much higher during raid5
> reads. I'm not sure why?

Probably all the memcpys.

For a raid5 read, the data is DMAed from the device into the
stripe_cache, and then memcpy is used to move it to the filesystem (or
other client) buffer. Worse: this memcpy happens on only one CPU, so a
multiprocessor won't make it go any faster.

It would be possible to bypass the stripe_cache for reads from a
non-degraded array (I did it for 2.4), but it is somewhat more complex
in 2.6 and I haven't attempted it yet (there have always been other
more interesting things to do).

To test if this is the problem, you could probably just comment out the
memcpy (the copy_data in handle_stripe) and see if the reads go faster.
Obviously you will be getting garbage back, but it should give you a
reasonably realistic measure of the cost.

NeilBrown
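
For anyone reproducing the tuning described above, the stripe_cache_size
change is a plain sysfs write. A minimal sketch, assuming the array is
md0 and using the 8192 value from the report (the attribute counts
stripe cache entries, each holding roughly one page per member device,
so very large values cost memory):

    # check the current value, then raise it (md0 is the array name from the report)
    cat /sys/block/md0/md/stripe_cache_size
    echo 8192 > /sys/block/md0/md/stripe_cache_size

A rough way to compare read throughput between settings, assuming your
dd supports iflag=direct so the page cache is bypassed (the device name
and 4G read size are placeholders):

    # sequential read test; adjust device and count to taste
    time dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct

Comparing the elapsed time and the CPU usage shown by top (including the
mdX_raid5 kernel thread, where this copy generally runs) across settings
gives an approximate measure of the memcpy cost described above.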