Patch "blk-mq: always allow reserved allocation in hctx_may_queue" has been added to the 5.9-stable tree

This is a note to let you know that I've just added the patch titled

    blk-mq: always allow reserved allocation in hctx_may_queue

to the 5.9-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     blk-mq-always-allow-reserved-allocation-in-hctx_may_.patch
and it can be found in the queue-5.9 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit bcc9e12168bd4e08081b8514157e2de7534e20b0
Author: Ming Lei <ming.lei@xxxxxxxxxx>
Date:   Fri Sep 11 18:41:14 2020 +0800

    blk-mq: always allow reserved allocation in hctx_may_queue
    
    [ Upstream commit 285008501c65a3fcee05d2c2c26cbf629ceff2f0 ]
    
    NVMe shares a tagset between the fabric queue and the admin queue, or
    between connect_q and NS queues, so hctx_may_queue() can be called to
    allocate requests for these queues.
    
    Tags can be reserved in these tagsets. Before error recovery there are
    often many in-flight requests which can't be completed, and a new
    reserved request may be needed in the error recovery path. However,
    hctx_may_queue() can keep returning false because there are too many
    in-flight requests which can't be completed during error handling.
    As a result, nothing can proceed.
    
    Fix this issue by always allowing reserved tag allocation in
    hctx_may_queue(). This is reasonable because reserved tags are supposed
    to always be available.
    
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Hannes Reinecke <hare@xxxxxxx>
    Cc: David Milburn <dmilburn@xxxxxxxxxx>
    Cc: Ewan D. Milne <emilne@xxxxxxxxxx>
    Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
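
To illustrate the failure mode and the fix, here is a minimal, compilable
userspace sketch of the fair-share check that hctx_may_queue() performs.
The names used below (tagset, may_queue, get_tag) are illustrative only,
not the kernel API:

#include <stdbool.h>
#include <stdio.h>

struct tagset {
	unsigned int queue_depth;	/* normal tags shared by all queues */
	unsigned int active_queues;	/* queues currently using the tagset */
	unsigned int in_flight;		/* tags already handed out */
};

/*
 * Fair-share check in the spirit of hctx_may_queue(): each active queue
 * may use at most its slice of the shared depth.  When the depth is held
 * by requests that cannot complete, this check fails indefinitely.
 */
static bool may_queue(const struct tagset *ts)
{
	unsigned int users = ts->active_queues ? ts->active_queues : 1;

	return ts->in_flight < ts->queue_depth / users;
}

/*
 * The fix: reserved allocations skip the fairness check entirely,
 * because reserved tags are supposed to always be available.
 */
static bool get_tag(struct tagset *ts, bool reserved)
{
	if (!reserved && !may_queue(ts))
		return false;
	ts->in_flight++;
	return true;
}

int main(void)
{
	/* Error-recovery scenario: the shared depth is fully occupied by
	 * requests that cannot complete until recovery makes progress. */
	struct tagset ts = {
		.queue_depth	= 64,
		.active_queues	= 4,
		.in_flight	= 64,
	};

	printf("normal alloc:   %s\n", get_tag(&ts, false) ? "ok" : "blocked");
	printf("reserved alloc: %s\n", get_tag(&ts, true) ? "ok" : "blocked");
	return 0;
}

Without the reserved-tag bypass, both allocations print "blocked" and
recovery can never start; with it, the reserved allocation goes through.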

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 32d82e23b0953..a1c1e7c611f7b 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -59,7 +59,8 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
 			    struct sbitmap_queue *bt)
 {
-	if (!data->q->elevator && !hctx_may_queue(data->hctx, bt))
+	if (!data->q->elevator && !(data->flags & BLK_MQ_REQ_RESERVED) &&
+			!hctx_may_queue(data->hctx, bt))
 		return BLK_MQ_NO_TAG;
 
 	if (data->shallow_depth)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c27a61029cdd0..94a53d779c12b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1105,10 +1105,11 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
 	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
 		bt = &rq->mq_hctx->tags->breserved_tags;
 		tag_offset = 0;
+	} else {
+		if (!hctx_may_queue(rq->mq_hctx, bt))
+			return false;
 	}
 
-	if (!hctx_may_queue(rq->mq_hctx, bt))
-		return false;
 	tag = __sbitmap_queue_get(bt);
 	if (tag == BLK_MQ_NO_TAG)
 		return false;
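
For context on how a driver ends up on the reserved path in the second
hunk: reserved tags are carved out when the tagset is created, and callers
request one with BLK_MQ_REQ_RESERVED. The fragment below is a sketch of
that calling convention for a hypothetical driver; it is not part of this
patch, and a real tagset would also need .ops set and registration via
blk_mq_alloc_tag_set() before use:

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/err.h>

/* Hypothetical driver tagset: keep two tags in reserve so that
 * error-recovery commands can always obtain one. */
static struct blk_mq_tag_set example_set = {
	.nr_hw_queues	= 1,
	.queue_depth	= 64,
	.reserved_tags	= 2,
	.numa_node	= NUMA_NO_NODE,
};

static int example_send_recovery_cmd(struct request_queue *q)
{
	struct request *rq;

	/* With this patch, a BLK_MQ_REQ_RESERVED allocation bypasses
	 * hctx_may_queue(), so it succeeds even while the normal tag
	 * space is held by requests stuck behind error recovery. */
	rq = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_RESERVED);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* fill in and dispatch the command, then release the tag */
	blk_mq_free_request(rq);
	return 0;
}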


