On 10/08/2020 17:51, Kashyap Desai wrote:
Hannes/John - We need one more correction for the patch below:
https://github.com/hisilicon/kernel-dev/commit/ff631eb80aa0449eaeb78a282fd7eff2a9e42f77
I noticed that the elevator_queued count goes negative, mainly because there
are some cases where the IO is submitted from the dispatch queue (not the
scheduler queue) while the request still has the RQF_ELVPRIV flag set.
In such cases dd_finish_request() is called without dd_insert_request(). I
think it is better to decrement the counter once the request is dispatched out
of the scheduler queue. (Ming proposed using the dispatch path for decrementing
the counter, but I did not account for that, assuming RQF_ELVPRIV would be set
only if the IO was submitted from the scheduler queue.)
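For reference, the increment only happens on the scheduler insert path, roughly
like below (a sketch from memory of your patch, so the exact context may
differ):

	static void dd_insert_request(struct blk_mq_hw_ctx *hctx,
				      struct request *rq, bool at_head)
	{
		/* ... existing insert into fifo/sort lists under dd->lock ... */

		/* only reached when the request goes through the scheduler */
		atomic_inc(&hctx->elevator_queued);
	}

	static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
	{
		/* nothing queued in the scheduler, so nothing to dispatch */
		if (!atomic_read(&hctx->elevator_queued))
			return false;

		/* ... existing checks on fifo/sort lists ... */
	}

A request which bypasses dd_insert_request() but still carries RQF_ELVPRIV
therefore only sees the decrement in dd_finish_request().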
Below is the additional change. Can you merge this?
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 9d75374..bc413dd 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -385,6 +385,8 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
spin_lock(&dd->lock);
rq = __dd_dispatch_request(dd);
+ if (rq)
+ atomic_dec(&rq->mq_hctx->elevator_queued);
Is there any reason why this operation could not be taken outside the
spinlock? I assume raciness is not a problem with this patch... (see the
sketch I have put below the quoted patch)
spin_unlock(&dd->lock);
return rq;
@@ -574,7 +576,6 @@ static void dd_finish_request(struct request *rq)
blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
spin_unlock_irqrestore(&dd->zone_lock, flags);
}
- atomic_dec(&rq->mq_hctx->elevator_queued);
}
static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
--
2.9.5
Kashyap
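Something like the below is what I had in mind (untested sketch, assuming the
counter only needs to be atomic and does not have to stay consistent with the
lists under dd->lock):

	static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
	{
		struct deadline_data *dd = hctx->queue->elevator->elevator_data;
		struct request *rq;

		spin_lock(&dd->lock);
		rq = __dd_dispatch_request(dd);
		spin_unlock(&dd->lock);

		/* the counter is atomic, so it should not need dd->lock */
		if (rq)
			atomic_dec(&rq->mq_hctx->elevator_queued);

		return rq;
	}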
BTW, can you provide a Signed-off-by if you want the credit upgraded to
Co-developed-by?
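That would be something like the below pair of tags at the end of the commit
message (placeholder address, obviously substitute your real one), since
Co-developed-by needs to be accompanied by the co-developer's Signed-off-by:

	Co-developed-by: Kashyap Desai <kashyap.desai@example.com>
	Signed-off-by: Kashyap Desai <kashyap.desai@example.com>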
Thanks,
john