On really fast storage it can be beneficial to delay running the
request_queue so that the elevator has more opportunity to merge requests.
Otherwise, it has been observed that requests are sent to q->request_fn
much more quickly than is ideal on IOPS-bound backends.

Signed-off-by: Mike Snitzer <snitzer@xxxxxxxxxx>
---
 drivers/md/dm.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 6a85bd4..8e58df5 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1021,10 +1021,13 @@ static void end_clone_bio(struct bio *clone, int error)
  */
 static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
 {
+	int nr_requests_pending;
+
 	atomic_dec(&md->pending[rw]);
 
 	/* nudge anyone waiting on suspend queue */
-	if (!md_in_flight(md))
+	nr_requests_pending = md_in_flight(md);
+	if (!nr_requests_pending)
 		wake_up(&md->wait);
 
 	/*
@@ -1033,8 +1036,11 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
 	 * back into ->request_fn() could deadlock attempting to grab the
 	 * queue lock again.
 	 */
-	if (run_queue)
-		blk_run_queue_async(md->queue);
+	if (run_queue) {
+		if (!nr_requests_pending ||
+		    (nr_requests_pending >= md->queue->nr_congestion_on))
+			blk_run_queue_async(md->queue);
+	}
 
 	/*
 	 * dm_put() must be at the end of this function. See the comment above
-- 
1.9.3 (Apple Git-50)

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
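
For readers who want the heuristic in isolation: below is a minimal
user-space sketch of the gating logic the patch adds to rq_completed().
It is an illustration, not kernel code; maybe_run_queue(), pending and
congestion_on are hypothetical stand-ins for md_in_flight(),
q->nr_congestion_on and blk_run_queue_async().

#include <stdio.h>

/* Stand-in for blk_run_queue_async(): just record that the queue
 * would be kicked. */
static void run_queue_async(void)
{
	printf("queue kicked\n");
}

/*
 * Decide whether to kick the queue after a request completes.  Kick
 * only at the two extremes:
 *   - pending == 0: the device has gone idle, so there is nothing in
 *     flight to batch behind; run immediately to avoid stalling any
 *     queued requests.
 *   - pending >= congestion_on: enough requests are outstanding that
 *     merging has had its chance and forward progress matters more
 *     than further batching.
 * In between, stay quiet so the elevator has time to merge; the next
 * completion will re-evaluate.
 */
static void maybe_run_queue(unsigned int pending, unsigned int congestion_on)
{
	if (pending == 0 || pending >= congestion_on)
		run_queue_async();
	else
		printf("deferred (pending=%u)\n", pending);
}

int main(void)
{
	maybe_run_queue(0, 16);		/* idle: kick */
	maybe_run_queue(4, 16);		/* mid-load: defer, let the elevator merge */
	maybe_run_queue(16, 16);	/* at congestion threshold: kick */
	return 0;
}

The middle case is the point of the patch: on storage fast enough that
completions arrive back-to-back, deferring the queue run lets requests
accumulate long enough for the elevator to merge them.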