On Sat, May 13, 2023 at 06:12:27PM -0400, Tian Lan wrote:
> From: Tian Lan <tian.lan@xxxxxxxxxxxx>
>
> The nr_active counter continues to increase over time, which causes
> blk_mq_get_tag to hang until the thread is rescheduled to a different
> core, despite there still being tags available.
>
> kernel-stack
>
>   INFO: task inboundIOReacto:3014879 blocked for more than 2 seconds
>     Not tainted 6.1.15-amd64 #1 Debian 6.1.15~debian11
>   "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>   task:inboundIOReacto state:D stack:0 pid:3014879 ppid:4557 flags:0x00000000
>   Call Trace:
>     <TASK>
>     __schedule+0x351/0xa20
>     schedule+0x5d/0xe0
>     io_schedule+0x42/0x70
>     blk_mq_get_tag+0x11a/0x2a0
>     ? dequeue_task_stop+0x70/0x70
>     __blk_mq_alloc_requests+0x191/0x2e0
>
> kprobe output showing the RQF_MQ_INFLIGHT bit is not cleared before
> __blk_mq_free_request is called:
>
>   320  320  kworker/29:1H  __blk_mq_free_request rq_flags 0x220c0 in-flight 1
>        b'__blk_mq_free_request+0x1 [kernel]'
>        b'bt_iter+0x50 [kernel]'
>        b'blk_mq_queue_tag_busy_iter+0x318 [kernel]'
>        b'blk_mq_timeout_work+0x7c [kernel]'
>        b'process_one_work+0x1c4 [kernel]'
>        b'worker_thread+0x4d [kernel]'
>        b'kthread+0xe6 [kernel]'
>        b'ret_from_fork+0x1f [kernel]'
>
> Signed-off-by: Tian Lan <tian.lan@xxxxxxxxxxxx>
> ---
>  block/blk-mq.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f6dad0886a2f..850bfb844ed2 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -683,6 +683,10 @@ static void __blk_mq_free_request(struct request *rq)
>          blk_crypto_free_request(rq);
>          blk_pm_mark_last_busy(rq);
>          rq->mq_hctx = NULL;
> +
> +        if (rq->rq_flags & RQF_MQ_INFLIGHT)
> +                __blk_mq_dec_active_requests(hctx);
> +
>          if (rq->tag != BLK_MQ_NO_TAG)
>                  blk_mq_put_tag(hctx->tags, ctx, rq->tag);
>          if (sched_tag != BLK_MQ_NO_TAG)
> @@ -694,15 +698,11 @@ static void __blk_mq_free_request(struct request *rq)
>  void blk_mq_free_request(struct request *rq)
>  {
>          struct request_queue *q = rq->q;
> -        struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>
>          if ((rq->rq_flags & RQF_ELVPRIV) &&
>              q->elevator->type->ops.finish_request)
>                  q->elevator->type->ops.finish_request(rq);
>
> -        if (rq->rq_flags & RQF_MQ_INFLIGHT)
> -                __blk_mq_dec_active_requests(hctx);
> -
>          if (unlikely(laptop_mode && !blk_rq_is_passthrough(rq)))
>                  laptop_io_completion(q->disk->bdi);

This patch looks fine, but please add words about why this approach fixes
the issue, together with a Fixes tag:

- the difference between blk_mq_free_request() and blk_mq_end_request_batch()
  wrt. when to call __blk_mq_dec_active_requests(): the former does it before
  calling req_ref_put_and_test(), and the latter decreases the active request
  count after req_ref_put_and_test().

- Fixes: f794f3351f26 ("block: add support for blk_mq_end_request_batch()")

Once the above is done, feel free to add:

Reviewed-by: Ming Lei <ming.lei@xxxxxxxxxx>

Thanks,
Ming
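
For reference, below is a trimmed-down sketch of the ordering difference
described above, loosely based on the blk-mq code of that era. The bodies are
heavily abridged and keep only the calls relevant to the nr_active accounting,
so treat it as an illustration rather than the exact source (the real batch
path, for instance, amortizes the decrement through
__blk_mq_sub_active_requests() when it flushes a tag batch):

    /*
     * Non-batch completion: before the patch, the active count was dropped
     * here, BEFORE the final reference put, i.e. regardless of whether this
     * put turns out to be the last one.
     */
    void blk_mq_free_request(struct request *rq)
    {
            struct blk_mq_hw_ctx *hctx = rq->mq_hctx;

            if (rq->rq_flags & RQF_MQ_INFLIGHT)     /* pre-patch placement */
                    __blk_mq_dec_active_requests(hctx);

            if (req_ref_put_and_test(rq))
                    __blk_mq_free_request(rq);
    }

    /*
     * Batch completion: the reference is put first, and the decrement only
     * runs when this put was the last one. If the last put happens elsewhere
     * (e.g. bt_iter() -> blk_mq_put_rq_ref() from the timeout worker, as in
     * the kprobe stack above), the decrement is skipped here and, before the
     * patch, was done nowhere else, which is what leaks the nr_active
     * counter.
     */
    void blk_mq_end_request_batch(struct io_comp_batch *iob)
    {
            struct request *rq;

            rq_list_for_each(&iob->req_list, rq) {
                    if (!req_ref_put_and_test(rq))
                            continue;       /* last put happens elsewhere */

                    if (rq->rq_flags & RQF_MQ_INFLIGHT)
                            __blk_mq_dec_active_requests(rq->mq_hctx);
                    /* ... tag returned via the batch helpers ... */
            }
    }

Moving the decrement into __blk_mq_free_request(), as the patch does, ties it
to the point where the request is actually freed, so the blk_mq_put_rq_ref()
path seen in the kprobe stack releases the count as well, and
blk_mq_free_request() no longer needs its own copy of the check.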