NVMe shares a tagset between the fabric queue and the admin queue, or between
connect_q and the NS queues, so hctx_may_queue() can be called when allocating
requests for these queues, and tags can be reserved in these tagsets. Before
error recovery there are often many in-flight requests that can't be
completed, and a new reserved request may be needed on the error recovery
path. However, hctx_may_queue() can keep returning false because of those
in-flight requests that can't complete during error handling, and then
nothing can make progress.

Fix this by always allowing reserved tag allocation in hctx_may_queue(). This
is reasonable because reserved tags are supposed to be available at any time.

Cc: David Milburn <dmilburn@xxxxxxxxxx>
Cc: Ewan D. Milne <emilne@xxxxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 block/blk-mq-tag.c | 3 ++-
 block/blk-mq.c     | 6 ++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index c31c4a0478a5..aacf10decdbd 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -76,7 +76,8 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
 			    struct sbitmap_queue *bt)
 {
-	if (!data->q->elevator && !hctx_may_queue(data->hctx, bt))
+	if (!data->q->elevator && !(data->flags & BLK_MQ_REQ_RESERVED) &&
+	    !hctx_may_queue(data->hctx, bt))
 		return BLK_MQ_NO_TAG;
 
 	if (data->shallow_depth)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ccb500e38008..91cff275451d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1147,15 +1147,17 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
 	struct sbitmap_queue *bt = rq->mq_hctx->tags->bitmap_tags;
 	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
 	int tag;
+	bool reserved = blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags,
+			rq->internal_tag);
 
 	blk_mq_tag_busy(rq->mq_hctx);
 
-	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
+	if (reserved) {
 		bt = rq->mq_hctx->tags->breserved_tags;
 		tag_offset = 0;
 	}
 
-	if (!hctx_may_queue(rq->mq_hctx, bt))
+	if (!reserved && !hctx_may_queue(rq->mq_hctx, bt))
 		return false;
 	tag = __sbitmap_queue_get(bt);
 	if (tag == BLK_MQ_NO_TAG)
-- 
2.25.2
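
For readers who don't live in the block layer, the sketch below is a minimal
user-space model of the behaviour being fixed. It is an illustration only, not
block-layer code, and every name in it (tag_pool, hw_queue, may_queue,
alloc_tag) is made up. may_queue() mimics the fair-share throttling that
hctx_may_queue() applies when several queues share one tagset, and alloc_tag()
shows why a reserved allocation has to bypass that check: once a queue's fair
share is consumed by stuck I/O, only the reserved path can still make progress.

/*
 * Hypothetical user-space model of the fair-share check; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct tag_pool {
	unsigned int depth;		/* normal tags in the shared set */
	unsigned int reserved_depth;	/* tags set aside for e.g. error recovery */
	unsigned int active_queues;	/* queues currently sharing the set */
};

struct hw_queue {
	struct tag_pool *pool;
	unsigned int in_flight;		/* normal tags this queue already holds */
};

/* Rough analogue of hctx_may_queue(): allow at most depth / active_queues. */
static bool may_queue(const struct hw_queue *hq)
{
	unsigned int users = hq->pool->active_queues;
	unsigned int fair_share;

	if (!users)
		users = 1;
	fair_share = hq->pool->depth / users;

	return hq->in_flight < fair_share;
}

/*
 * A reserved allocation is never subjected to the fair-share check, so the
 * recovery path can still get a request even when the queue is full of
 * in-flight I/O that cannot complete.
 */
static bool alloc_tag(struct hw_queue *hq, bool reserved)
{
	if (reserved)
		return hq->pool->reserved_depth > 0;

	if (!may_queue(hq))
		return false;

	hq->in_flight++;
	return true;
}

int main(void)
{
	struct tag_pool pool = { .depth = 4, .reserved_depth = 1, .active_queues = 2 };
	struct hw_queue hq = { .pool = &pool, .in_flight = 2 };	/* fair share used up */

	printf("normal alloc:   %s\n", alloc_tag(&hq, false) ? "ok" : "blocked");
	printf("reserved alloc: %s\n", alloc_tag(&hq, true) ? "ok" : "blocked");
	return 0;
}

With the queue's fair share (depth / active_queues == 2) already consumed, the
normal allocation prints "blocked" while the reserved one prints "ok", which
is the behaviour the patch restores for the real reserved tags.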