Re: md device io request split


On Tue, 22 Nov 2011 10:36:34 +0100 "Ramon Schönborn" <RSchoenborn@xxxxxxx>
wrote:

> Hi,
> 
> could someone help me understand why md splits io requests in 4k blocks?
> iostat says:
> Device:	rrqm/s	wrqm/s	r/s	w/s	rMB/s	wMB/s	avgrq-sz	avgqu-sz	await	svctm	%util
> ...
> dm-71	4.00	5895.00	31.00	7538.00	0.14	52.54	14.25	94.69	16041	0.13	96.00
> dm-96	2.00	5883.00	18.00	7670.00	0.07	52.95	14.13	104.84	13.69	0.12	96.00
> md17	0.00	0.00	48.00	13234.00	0.19	51.70	8.00	0.00	0.00	0.00	0.00
> 
> md17 is a raid1 with members "dm-71" and "dm-96". IO was generated with something like "dd if=/dev/zero bs=100k of=/dev/md17".
> According to "avgrq-sz", the average size of the requests is 8 times 512b, i.e. 4k.
> I used kernel 3.0.7 and verified the results with a raid5 and older kernel version (2.6.32) too.
> Why do I care about this at all?
> The io requests in my case come from a virtual machine, where the requests have already been merged in a virtual device. Afterwards the requests are split at the md level (on the vm host) and later merged again (at dm-71/dm-96). This seems to be avoidable overhead, doesn't it?

Reads to a RAID5 device should be as large as the chunk size.

Writes will always be 4K, as they go through the stripe cache, which uses 4K
blocks.
These 4K requests should be combined into larger requests by the
elevator/scheduler at a lower level, so the device should see largish writes.
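To make the merging step concrete, here is a toy model (plain Python, not kernel code) of how an I/O scheduler back-merges contiguous 4K requests into larger ones. The sector arithmetic mirrors the iostat numbers above (avgrq-sz of 8 sectors = 4K); the function and its representation of requests are illustrative simplifications, not the actual elevator code.

```python
SECTOR = 512
PAGE_SECTORS = 4096 // SECTOR  # 8 sectors: the avgrq-sz iostat reports for md17

def merge_back(requests):
    """Coalesce (start_sector, nr_sectors) requests that are contiguous,
    the way a back-merge in the elevator would.  Toy model only."""
    merged = []
    for start, nr in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1][1] += nr          # extend the previous request
        else:
            merged.append([start, nr])   # start a new request
    return [tuple(r) for r in merged]

# A 100k dd write arrives below md as twenty-five 4K requests...
split = [(i * PAGE_SECTORS, PAGE_SECTORS) for i in range(25)]
print(merge_back(split))   # -> [(0, 200)]: one 200-sector (100K) request
```

This is why the underlying dm devices in the iostat output show avgrq-sz around 14 while md17 shows exactly 8: the merging happens below md.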

Writing to a RAID5 is always going to be costly due to the need to compute
and write parity, so it isn't clear to me that this is a place where
optimisation is appropriate.
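The parity cost comes from read-modify-write: because parity is the XOR of all data blocks in a stripe, a small write needs the old data and old parity read back before the new parity can be written. A sketch with made-up byte values (the stripe contents and block indices are arbitrary illustrations):

```python
from functools import reduce
from operator import xor

# Hypothetical stripe of three data blocks (arbitrary byte values).
stripe = [0x1A, 0x2B, 0x3C]
parity = reduce(xor, stripe)          # parity = XOR of all data blocks

# Small write replacing block 1: read old data + old parity,
# then new_parity = old_parity ^ old_data ^ new_data
# (two reads and two writes per sub-stripe update, whatever the array width).
old, new = stripe[1], 0x55
new_parity = parity ^ old ^ new
stripe[1] = new

# The incrementally updated parity matches a full recompute.
assert new_parity == reduce(xor, stripe)
```

So even perfectly merged writes still pay the extra read/write round trips unless they cover a full stripe.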


RAID1 will only limit requests to 4K if the device beneath it is
non-contiguous - e.g. a striped array or LVM arrangement where consecutive
blocks might be on different devices.
Because of the way request splitting is managed in the block layer, RAID1 is
only allowed to send down a request that will be sure to fit on a single
device.  As different devices in the RAID1 could have different alignments it
would be very complex to track exactly how each request must be split at the
top of the stack so as to fit all the way down, and I think it is impossible
to do it in a race-free way.
So if this might be the case, RAID1 insists on only receiving 1-page requests
because it knows they are always allowed to be passed down.
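The alignment problem can be illustrated with a toy model (not kernel code): suppose each RAID1 member is itself striped, so it only accepts a request that stays within one of its chunks. The chunk size and member offsets below are made-up numbers chosen to show misalignment.

```python
SECTOR = 512
PAGE_SECTORS = 4096 // SECTOR   # 8 sectors per 4K page

def sectors_to_boundary(start, chunk_sectors, offset):
    """Sectors from `start` to the member's next chunk boundary."""
    return chunk_sectors - ((start + offset) % chunk_sectors)

def safe_request(start, members):
    """Largest request at `start` that fits one chunk on every member."""
    return min(sectors_to_boundary(start, c, o) for c, o in members)

# Two striped members with 64K (128-sector) chunks, differently aligned:
members = [(128, 0), (128, 8)]      # (chunk_sectors, offset_sectors)

# The safe size depends on exactly where the request starts...
print(safe_request(0, members))     # -> 120 sectors
print(safe_request(120, members))   # -> only 8 sectors here

# ...so the only size guaranteed to fit at every page-aligned start is
# a single page, which is the limit md raid1 falls back to.
worst = min(safe_request(s, members) for s in range(0, 1024, PAGE_SECTORS))
print(worst)                        # -> 8 sectors == one 4K page
```

Tracking the per-position safe size at the top of the stack would mean recomputing it for every request against every member's geometry, and the geometry can change under you (e.g. during reshape), which is the race mentioned above.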

NeilBrown


