On 2023/8/17 22:50, Bart Van Assche wrote:
> On 8/17/23 07:41, kernel test robot wrote:
>> [  222.622837][ T2216] statistics for priority 1: i 276 m 0 d 276 c 278
>> [  222.629307][ T2216] WARNING: CPU: 0 PID: 2216 at block/mq-deadline.c:680 dd_exit_sched (block/mq-deadline.c:680 (discriminator 3))
>
> The above information shows that dd_inserted_request() has been called
> 276 times and also that dd_finish_request() has been called 278 times.

Thanks much for your help. This patch indeed introduced a regression:
postflush requests are completed twice, which is why dd_finish_request()
is called more often than dd_inserted_request().

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a8c63bef8ff1..7cd47ffc04ce 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -686,8 +686,10 @@ static void blk_mq_finish_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 
-	if (rq->rq_flags & RQF_USE_SCHED)
+	if (rq->rq_flags & RQF_USE_SCHED) {
 		q->elevator->type->ops.finish_request(rq);
+		rq->rq_flags &= ~RQF_USE_SCHED;
+	}
 }

Clearing the RQF_USE_SCHED flag here should fix this problem. That should
be safe because finish_request() is the last scheduler callback for a
request, so the flag is no longer needed afterwards.

Jens, should I send this diff as a separate patch or resend an updated v3?

Thanks.

> Calling dd_finish_request() more than once per request breaks the code
> for priority handling since that code checks how many requests are
> pending per priority level by subtracting the number of completion calls
> from the number of insertion calls (see also dd_queued()). I think the
> above output indicates that this patch introduced a regression.
>
> Bart.