Re: Do not overload dispatch queue (Was: Re: IO scheduler based IO controller V10)

On Sat, 2009-10-03 at 21:49 +0200, Mike Galbraith wrote:

> It's a huge winner for sure, and there's no way to quantify.  I'm just
> afraid the other shoe will drop from what I see/hear.  I should have
> kept my trap shut and waited really, but the impression was strong.

Seems there was at least one "other shoe".  For a concurrent read vs
write, we're losing ~10% throughput that we weren't losing prior to that
last commit.  The tweak below gets it back, along with the concurrent git
throughput, _seemingly_ without significant sacrifice.

cfq-iosched: adjust async delay

8e296755: "implement slower async initiate and queue ramp up" introduced a
throughput regression for a concurrent reader vs writer.  Base the async
dispatch delay on cfq_slice_async rather than cfq_slice_sync, capped at
cfq_slice_sync in case someone gives async a larger slice than sync.  With
the default slices (sync 100ms, async 40ms) that cuts the initial delay
from 100ms to 40ms, and restores the lost throughput.

Signed-off-by: Mike Galbraith <efault@xxxxxx>

---
 block/cfq-iosched.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Index: linux-2.6/block/cfq-iosched.c
===================================================================
--- linux-2.6.orig/block/cfq-iosched.c
+++ linux-2.6/block/cfq-iosched.c
@@ -1343,17 +1343,19 @@ static int cfq_dispatch_requests(struct
 	 */
 	if (!cfq_cfqq_sync(cfqq) && cfqd->cfq_desktop) {
 		unsigned long last_sync = jiffies - cfqd->last_end_sync_rq;
+		unsigned long slice = cfq_slice_async;
 		unsigned int depth;
 
+		slice = min(slice, cfq_slice_sync);
 		/*
 		 * must wait a bit longer
 		 */
-		if (last_sync < cfq_slice_sync) {
-			cfq_schedule_dispatch(cfqd, cfq_slice_sync - last_sync);
+		if (last_sync < slice) {
+			cfq_schedule_dispatch(cfqd, slice - last_sync);
 			return 0;
 		}
 
-		depth = last_sync / cfq_slice_sync;
+		depth = last_sync / slice;
 		if (depth < max_dispatch)
 			max_dispatch = depth;
 	}
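
For anyone who wants to poke at the arithmetic without booting a kernel,
here is a minimal userspace model of the gating logic above (my sketch,
not kernel code).  It assumes the default CFQ slices (cfq_slice_sync =
100ms, cfq_slice_async = 40ms) and treats jiffies as milliseconds:

#include <stdio.h>

#define CFQ_SLICE_SYNC	100UL	/* default sync slice, ms */
#define CFQ_SLICE_ASYNC	 40UL	/* default async slice, ms */

#define min(a, b)	((a) < (b) ? (a) : (b))

/*
 * Async dispatch depth allowed, given how many ms have passed since the
 * last sync request completed.  0 means "must wait a bit longer".
 */
static unsigned int async_depth(unsigned long last_sync, unsigned int max_dispatch)
{
	unsigned long slice = min(CFQ_SLICE_ASYNC, CFQ_SLICE_SYNC);

	if (last_sync < slice)
		return 0;

	/* ramp up one dispatch per elapsed slice, capped at max_dispatch */
	return min(last_sync / slice, (unsigned long)max_dispatch);
}

int main(void)
{
	unsigned long ms;

	for (ms = 0; ms <= 200; ms += 20)
		printf("%3lums after last sync completion -> depth %u\n",
		       ms, async_depth(ms, 4));
	return 0;
}

Depth stays 0 until one async slice has passed, then climbs 1, 2, 3 ...
up to max_dispatch.  The old cfq_slice_sync basis kept async starved for
a full 100ms, which is where the concurrent read vs write throughput went.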

--numbers--

dd vs konsole -e exit
                                             Avg
 1.70     1.94     1.32     1.89     1.87    1.7     fairness=1 overload_delay=1
 1.55     1.79     1.38     1.53     1.57    1.5     desktop=1 +last_end_sync
 1.09     0.87     1.11     0.96     1.11    1.02    block-for-linus
 1.10     1.13     0.98     1.11     1.13    1.09    block-for-linus + tweak

concurrent git test
                                                 Avg  ratio
108.12   106.33    106.34    97.00    106.52   104.8  1.000 virgin
 89.81    88.82     91.56    96.57     89.38    91.2   .870 desktop=1 +last_end_sync
 92.61    94.60     92.35    93.17     94.05    93.3   .890 blk-for-linus
 89.33    88.82     89.99    88.54     89.09    89.1   .850 blk-for-linus + tweak

read vs write test

desktop=0                                Avg
elapsed        98.23    91.97   91.77    93.9  sec   1.000
 30s-dd-read     48.5     49.6    49.1    49.0  MB/s  1.000
30s-dd-write    23.1     27.3    31.3    27.2        1.000
dd-read-total   49.4     50.1    49.6    49.7        1.000
dd-write-total  34.5     34.9    34.9    34.7        1.000

desktop=1 pop 8e296755 (reverted)         Avg
elapsed        93.30    92.77    90.11   92.0         .979
30s-dd-read     50.5     50.4     51.8   50.9        1.038 
30s-dd-write    22.7     26.4     27.7   25.6         .941
dd-read-total   51.2     50.1     51.6   50.9        1.024 
dd-write-total  34.2     34.5     35.6   34.7        1.000

desktop=1 push 8e296755 (applied)         Avg
elapsed       104.51   104.52   101.20  103.4        1.101
30s-dd-read     43.0     43.6     44.5   43.7         .891
30s-dd-write    21.4     23.9     28.9   24.7         .908
dd-read-total   42.9     43.0     43.5   43.1         .867
dd-write-total  30.4     30.3     31.5   30.7         .884

desktop=1 push 8e296755 + tweak           Avg
elapsed        92.10    94.34    93.68   93.3         .993
30s-dd-read     49.7     49.3     48.8   49.2        1.004 
30s-dd-write    23.7     27.1     23.1   24.6         .904
dd-read-total   50.2     50.1     48.7   49.6         .997
dd-write-total  34.7     33.9     34.0   34.2         .985

#!/bin/sh
# Concurrent read vs write test: 3GB streaming writer vs 3GB streaming reader.
# Create the read source once before the first run:
# dd if=/dev/zero of=deleteme bs=1M count=3000

# Drop pagecache, dentries and inodes so the reader has to hit the disk.
echo 3 > /proc/sys/vm/drop_caches

dd if=/dev/zero of=deleteme2 bs=1M count=3000 &
dd if=deleteme of=/dev/null bs=1M count=3000 &
sleep 30
# SIGUSR1 makes each dd print transfer statistics (the 30s figures above).
killall -q -USR1 dd &
wait
rm -f deleteme2
sync
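
Each configuration above is, by the look of the three run columns, three
invocations of this script: the 30s-dd-* rows are what each dd reports at
the SIGUSR1 poke 30 seconds in, and the *-total rows are the final figures
when both transfers complete.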



