Re: raid5 that used parity for reads only when degraded

On Thursday March 23, aizvorski@xxxxxxxxx wrote:
> Neil - Thank you very much for the response.  
> 
> In my tests with identically configured raid0 and raid5 arrays, raid5
> initially had much lower throughput during reads.  I had assumed that
> was because raid5 did parity-checking all the time.  It turns out that
> raid5 throughput can get fairly close to raid0 throughput
> if /sys/block/md0/md/stripe_cache_size is set to a very high value,
> 8192-16384.  However the cpu load is still very much higher during raid5
> reads.  I'm not sure why?

Probably all the memcpys.
For a raid5 read, the data is DMAed from the device into the
stripe_cache, and then memcpy is used to move it to the filesystem (or
other client) buffer.  Worse: this memcpy happens on only one CPU, so a
multiprocessor won't make it go any faster.
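
(A rough user-space model of that cost, not kernel code: the sketch
below just times a single-threaded memcpy between two buffers standing
in for the stripe_cache and the client buffer.  The buffer and chunk
sizes are arbitrary; the point is that the copy runs on one CPU and its
bandwidth becomes the ceiling for raid5 read throughput.)

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (64 * 1024 * 1024)	/* 64MiB of "data read from disk" */
#define CHUNK    (4 * 1024)		/* copy in page-sized pieces      */

static double seconds(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	char *cache = malloc(BUF_SIZE);	/* stands in for the stripe_cache  */
	char *dest  = malloc(BUF_SIZE);	/* stands in for the client buffer */
	double t0, t1;
	size_t off;

	if (!cache || !dest)
		return 1;
	/* touch both buffers so page faults don't skew the timing */
	memset(cache, 0xaa, BUF_SIZE);
	memset(dest, 0, BUF_SIZE);

	t0 = seconds();
	for (off = 0; off < BUF_SIZE; off += CHUNK)
		memcpy(dest + off, cache + off, CHUNK);
	t1 = seconds();

	printf("extra copy of %dMiB took %.3fs (%.0f MB/s on one CPU)\n",
	       BUF_SIZE >> 20, t1 - t0, BUF_SIZE / (t1 - t0) / 1e6);
	free(cache);
	free(dest);
	return 0;
}

Build with something like "gcc -O2 copycost.c" (older glibc needs -lrt
for clock_gettime) and compare the MB/s it reports with the raw read
throughput of your array.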

It would be possible to bypass the stripe_cache for reads from a
non-degraded array (I did it for 2.4), but it is somewhat more complex
in 2.6 and I haven't attempted it yet (there have always been other
more interesting things to do).
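
(The arithmetic such a bypass needs is roughly what
raid5_compute_sector already does: map a logical array sector straight
to a component device and an offset on it.  Below is a user-space
sketch of that mapping, assuming the default left-symmetric layout;
the struct, the function name and the 4-disk / 64KiB-chunk example in
main() are made up for illustration, it is not the driver code.)

#include <stdio.h>

struct raid5_map {
	int disk;              /* component device holding the data chunk */
	long long dev_sector;  /* sector offset on that device            */
	int parity_disk;       /* device holding this stripe's parity     */
};

/* Map a logical array sector to its place on the component devices,
 * assuming the default left-symmetric raid5 layout. */
static struct raid5_map map_sector(long long logical, int raid_disks,
				   int chunk_sectors)
{
	int data_disks = raid_disks - 1;
	long long chunk_number = logical / chunk_sectors;
	long long chunk_offset = logical % chunk_sectors;
	long long stripe = chunk_number / data_disks;
	int dd = (int)(chunk_number % data_disks);
	struct raid5_map m;

	/* parity rotates backwards each stripe; data follows it round-robin */
	m.parity_disk = data_disks - (int)(stripe % raid_disks);
	m.disk = (m.parity_disk + 1 + dd) % raid_disks;
	m.dev_sector = stripe * chunk_sectors + chunk_offset;
	return m;
}

int main(void)
{
	long long s;

	/* e.g. a 4-disk array with 64KiB chunks (128 sectors) */
	for (s = 0; s < 8 * 128; s += 128) {
		struct raid5_map m = map_sector(s, 4, 128);
		printf("logical %5lld -> disk %d, sector %4lld (parity on disk %d)\n",
		       s, m.disk, m.dev_sector, m.parity_disk);
	}
	return 0;
}

Given that mapping, a read on a non-degraded array could in principle
go straight to the component device without touching the stripe cache;
the arithmetic is the easy part, the work is in fitting it into the
2.6 request handling.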

To test if this is the problem, you could probably just comment out
the memcpy (the copy_data call in handle_stripe) and see if the reads
go faster.  Obviously you will be getting garbage back, but it should
give you a reasonably realistic measure of the cost.

NeilBrown
