[PATCH V12 10/12] block: add request allocation flag of BLK_MQ_REQ_FORCE

From 1151afd4c11997c2769c385586097bf4f1cf60ce Mon Sep 17 00:00:00 2001
From: Ming Lei <ming.lei@xxxxxxxxxx>
Date: Mon, 11 May 2020 15:43:28 +0800
Subject: [PATCH V12 10/12] block: add request allocation flag of
 BLK_MQ_REQ_FORCE

When an hctx becomes inactive, there may still be requests that were
allocated from it. They can't be queued to the inactive hctx; one approach
is to re-submit them via an active hctx.

However, the request queue may already have started to freeze, in which
case request allocation is no longer possible. Add the BLK_MQ_REQ_FORCE
flag to allow the block layer to allocate a request in this case, because
the queue won't be frozen completely before all requests allocated from
the inactive hctx are completed.

A similar approach was applied in commit 8dc765d438f1 ("SCSI: fix queue
cleanup race before queue initialization is done").

This also helps with other request dependency cases. For example, the
storage device may run into a problem such that IO requests can't be queued
to it successfully, and a passthrough request is required to fix the device.
If a queue freeze starts just before the passthrough request is allocated,
the queue freeze process, the IO process and the context allocating the
passthrough request all hang forever. See commit 01e99aeca397 ("blk-mq:
insert passthrough request into hctx->dispatch directly") for background
on this kind of issue.

Cc: John Garry <john.garry@xxxxxxxxxx>
Cc: Bart Van Assche <bvanassche@xxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
V12:
	- one-line change to avoid warning on BLK_MQ_REQ_FORCE

 block/blk-core.c       | 8 +++++++-
 include/linux/blk-mq.h | 7 +++++++
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ffb1579fd4da..c4e306f0e6fd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -430,6 +430,11 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 		if (success)
 			return 0;
 
+		if (flags & BLK_MQ_REQ_FORCE) {
+			percpu_ref_get(&q->q_usage_counter);
+			return 0;
+		}
+
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;
 
@@ -617,7 +622,8 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op,
 	struct request *req;
 
 	WARN_ON_ONCE(op & REQ_NOWAIT);
-	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT));
+	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT |
+				BLK_MQ_REQ_FORCE));
 
 	req = blk_mq_alloc_request(q, op, flags);
 	if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index c2ea0a6e5b56..7d7aa5305a67 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -448,6 +448,13 @@ enum {
 	BLK_MQ_REQ_INTERNAL	= (__force blk_mq_req_flags_t)(1 << 2),
 	/* set RQF_PREEMPT */
 	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
+
+	/*
+	 * Force request allocation; the caller must guarantee that the
+	 * queue won't be frozen completely during allocation. This flag
+	 * only matters once a queue freeze has been started.
+	 */
+	BLK_MQ_REQ_FORCE	= (__force blk_mq_req_flags_t)(1 << 4),
 };
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
-- 
2.25.2



