Re: [PATCH V10 11/11] block: deactivate hctx when the hctx is actually inactive

On Sun, May 10, 2020 at 08:30:29PM -0700, Bart Van Assche wrote:
> On 2020-05-10 19:11, Ming Lei wrote:
> > One simple solution is to pass BLK_MQ_REQ_PREEMPT to blk_get_request()
> > called in blk_mq_resubmit_rq(), because at that time the freeze wait
> > won't have returned, and it is safe to allocate a new request for
> > completing old requests originating from the inactive hctx.
> 
> I don't think that will help. Freezing a request queue starts with a
> call to this function:
> 
> void blk_freeze_queue_start(struct request_queue *q)
> {
> 	mutex_lock(&q->mq_freeze_lock);
> 	if (++q->mq_freeze_depth == 1) {
> 		percpu_ref_kill(&q->q_usage_counter);
> 		mutex_unlock(&q->mq_freeze_lock);
> 		if (queue_is_mq(q))
> 			blk_mq_run_hw_queues(q, false);
> 	} else {
> 		mutex_unlock(&q->mq_freeze_lock);
> 	}
> }
> 
> From blk_queue_enter():
> 
> 	const bool pm = flags & BLK_MQ_REQ_PREEMPT;
> 	[ ... ]
> 	if (percpu_ref_tryget_live(&q->q_usage_counter)) {
> 		/*
> 		 * The code that increments the pm_only counter is
> 		 * responsible for ensuring that that counter is
> 		 * globally visible before the queue is unfrozen.
> 		 */
> 		if (pm || !blk_queue_pm_only(q)) {
> 			success = true;
> 		} else {
> 			percpu_ref_put(&q->q_usage_counter);
> 		}
> 	}
> 
> In other words, setting the BLK_MQ_REQ_PREEMPT flag only makes a
> difference if blk_queue_pm_only(q) == true. Freezing a request queue
> involves calling percpu_ref_kill(&q->q_usage_counter). That causes all
> future percpu_ref_tryget_live() calls to return false until the queue
> has been unfrozen.
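
Indeed. It is worth spelling out the one property the fix below relies
on: percpu_ref_kill() only disables the *live* acquire path, while a
plain percpu_ref_get() still elevates the counter, as long as the
counter hasn't already dropped to zero. A minimal self-contained model
of the two acquire paths (plain C11 atomics, not the real percpu_ref
implementation, just the semantics that matter here):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct ref {
	atomic_long count;	/* outstanding references */
	atomic_bool dead;	/* set by "kill" */
};

/* model of percpu_ref_tryget_live(): refuses new users once killed */
static bool tryget_live(struct ref *r)
{
	if (atomic_load(&r->dead))
		return false;
	atomic_fetch_add(&r->count, 1);
	return true;
}

/* model of percpu_ref_get(): caller must guarantee count > 0 */
static void get(struct ref *r)
{
	atomic_fetch_add(&r->count, 1);
}

int main(void)
{
	struct ref r = { .count = 1, .dead = false };	/* one request in flight */

	atomic_store(&r.dead, true);			/* percpu_ref_kill() */
	printf("tryget_live: %d\n", tryget_live(&r));	/* prints 0: new users refused */
	get(&r);					/* still legal while count > 0 */
	printf("count: %ld\n", atomic_load(&r.count));	/* prints 2: freeze can't finish */
	return 0;
}

That surviving get() path is what the patch below builds on.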

OK, I had forgotten that part of the story. The issue can still be
fixed quite easily by adding a new request allocation flag in the slow
path; see the following patch:

diff --git a/block/blk-core.c b/block/blk-core.c
index ec50d7e6be21..d743be1b45a2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -418,6 +418,11 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 		if (success)
 			return 0;
 
+		if (flags & BLK_MQ_REQ_FORCE) {
+			percpu_ref_get(ref);
+			return 0;
+		}
+
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index c2ea0a6e5b56..2816886d0bea 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -448,6 +448,13 @@ enum {
 	BLK_MQ_REQ_INTERNAL	= (__force blk_mq_req_flags_t)(1 << 2),
 	/* set RQF_PREEMPT */
 	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
+
+	/*
+	 * Force request allocation: the caller must guarantee that the
+	 * queue can't be frozen completely during allocation. This flag
+	 * only matters after queue freezing has been started.
+	 */
+	BLK_MQ_REQ_FORCE	= (__force blk_mq_req_flags_t)(1 << 4),
 };
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,

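
The forced percpu_ref_get() above is only safe because the caller
still pins q_usage_counter through the request being resubmitted, so
the counter cannot reach zero (and the freeze wait cannot complete)
while the allocation runs. A hypothetical sketch of the intended
caller, assuming blk_mq_resubmit_rq() copies an old request from an
inactive hctx into a newly allocated one (the real function in this
series may differ):

static void blk_mq_resubmit_rq(struct request *rq)
{
	struct request *nrq;

	/*
	 * rq still holds a q_usage_counter reference, so the queue
	 * can't finish freezing while we allocate; BLK_MQ_REQ_FORCE
	 * takes an extra reference instead of blocking (or failing)
	 * in blk_queue_enter().
	 */
	nrq = blk_mq_alloc_request(rq->q, req_op(rq), BLK_MQ_REQ_FORCE);
	if (IS_ERR(nrq))
		return;

	/* ... transfer rq's payload to nrq and queue it for dispatch ... */
}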


Thanks,
Ming
