[PATCH/RFC/RFT] md: allow resync to go faster when there is competing IO.

Hi all,
 as you probably know, when md is doing resync and notices other IO, it
 throttles the resync to a configured "minimum", which defaults to
 1MB/sec/device.

 On a lot of modern devices, that is extremely slow.
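
 For reference, those limits are tunable both system-wide and per-array,
 so the slow default can already be worked around by hand.  A sketch,
 assuming an array named md0 (values are in KB/sec/device):

  # system-wide limits
  cat /proc/sys/dev/raid/speed_limit_min
  cat /proc/sys/dev/raid/speed_limit_max

  # per-array override; writing "system" reverts to the global value
  echo 50000 > /sys/block/md0/md/sync_speed_min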

 I don't want to change the default (not all drives are the same), so I
 wanted to come up with something that is a little bit dynamic.

 After a bit of pondering and a bit of trial and error, I have the following.
 It sometimes does what I want.  I don't think it is ever really bad.

 I'd appreciate it if people could test it on different hardware, different
 configs, different loads.

 What I have been doing is running
  while :; do cat /sys/block/md0/md/sync_speed; sleep 5; done > /root/some-file

 while a resync is happening and a load is being imposed.
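
 If you don't have a spare array to rebuild, a "check" pass exercises
 the same sync throttling and works for this test, and the competing
 load can be as simple as a big sequential write.  A sketch, again
 assuming md0, with a filesystem mounted at /mnt/md0:

  echo check > /sys/block/md0/md/sync_action
  dd if=/dev/zero of=/mnt/md0/loadfile bs=1M count=4096 oflag=direct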

 I do this with the old kernel and with this patch applied, then use
 gnuplot to look at the sync_speed graphs.
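
 Note that sync_speed reads "none" when no sync is running, so trim any
 such lines first.  A minimal gnuplot invocation to compare two runs,
 assuming the samples went to /root/speed-old and /root/speed-new (one
 sample every 5 seconds):

  gnuplot -persist -e 'plot "/root/speed-old" u ($0*5):1 w lines t "old", "/root/speed-new" u ($0*5):1 w lines t "new"'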

 I'd like to see that the new code is never slower than the old, and that
 resync rarely takes more than 20% of the available throughput when there
 is significant load.

 Any test results or other observations most welcome,

Thanks,
NeilBrown



When md notices non-sync IO happening while it is trying
to resync (or reshape or recover) it slows down to the
set minimum.

The default minimum might have made sense many years ago, but drives
have become faster since then.  Changing the default to match the
times isn't really a long-term solution.

This patch changes the code so that instead of waiting until the speed
has dropped to the target, it just waits until pending requests
have completed, and then waits about as long again.
This means that the delay inserted is a function of the speed
of the devices.

Tests show that:
 - for some loads, the resync speed is unchanged.  For those loads,
   increasing the minimum doesn't change the speed either.
   So this is a good result.  To increase resync speed under such
   loads we would probably need to increase the resync window
   size.

 - for other loads, resync speed does increase to a reasonable
   fraction (e.g. 20%) of the maximum possible, and throughput of
   the load drops only a little (e.g. 10%).

 - for still other loads, throughput of the non-sync load drops quite
   a bit more.  These seem to be latency-sensitive loads.

So it isn't a perfect solution, but it is mostly an improvement.

Signed-off-by: NeilBrown <neilb@xxxxxxx>

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 94741ee6ae69..ce6624b3cc1b 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7669,11 +7669,20 @@ void md_do_sync(struct md_thread *thread)
 			/((jiffies-mddev->resync_mark)/HZ +1) +1;
 
 		if (currspeed > speed_min(mddev)) {
-			if ((currspeed > speed_max(mddev)) ||
-					!is_mddev_idle(mddev, 0)) {
+			if (currspeed > speed_max(mddev)) {
 				msleep(500);
 				goto repeat;
 			}
+			if (!is_mddev_idle(mddev, 0)) {
+				/*
+				 * Give other IO more of a chance.
+				 * The faster the devices, the less we wait.
+				 */
+				unsigned long start = jiffies;
+				wait_event(mddev->recovery_wait,
+					   !atomic_read(&mddev->recovery_active));
+				schedule_timeout_uninterruptible(jiffies-start);
+			}
 		}
 	}
 	printk(KERN_INFO "md: %s: %s %s.\n",mdname(mddev), desc,
