Re: [PATCH V10 11/11] block: deactivate hctx when the hctx is actually inactive

On 2020-05-10 19:11, Ming Lei wrote:
> One simple solution is to pass BLK_MQ_REQ_PREEMPT to blk_get_request()
> called in blk_mq_resubmit_rq() because at that time freezing wait won't
> return and it is safe to allocate a new request for completing old
> requests originated from inactive hctx.

I don't think that will help. Freezing a request queue starts with a
call to this function:

void blk_freeze_queue_start(struct request_queue *q)
{
	mutex_lock(&q->mq_freeze_lock);
	if (++q->mq_freeze_depth == 1) {
		percpu_ref_kill(&q->q_usage_counter);
		mutex_unlock(&q->mq_freeze_lock);
		if (queue_is_mq(q))
			blk_mq_run_hw_queues(q, false);
	} else {
		mutex_unlock(&q->mq_freeze_lock);
	}
}
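
As a reminder, percpu_ref_kill() switches q_usage_counter into atomic
mode and drops the initial reference, after which
percpu_ref_tryget_live() fails until the ref is reinitialized. The
helper itself is a thin wrapper (quoting a v5.7-era
include/linux/percpu-refcount.h; comment mine):

	/*
	 * Switch the ref into atomic mode and drop the initial reference;
	 * from now on percpu_ref_tryget_live() returns false until
	 * percpu_ref_resurrect() or percpu_ref_reinit() is called.
	 */
	static inline void percpu_ref_kill(struct percpu_ref *ref)
	{
		percpu_ref_kill_and_confirm(ref, NULL);
	}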

From blk_queue_enter():

	const bool pm = flags & BLK_MQ_REQ_PREEMPT;
	[ ... ]
	if (percpu_ref_tryget_live(&q->q_usage_counter)) {
		/*
		 * The code that increments the pm_only counter is
		 * responsible for ensuring that that counter is
		 * globally visible before the queue is unfrozen.
		 */
		if (pm || !blk_queue_pm_only(q)) {
			success = true;
		} else {
			percpu_ref_put(&q->q_usage_counter);
		}
	}
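
And when that tryget fails, blk_queue_enter() does not give up; it
falls through to the freeze wait. Roughly, from the same v5.7-era
blk_queue_enter() (paraphrased from memory, barrier comment trimmed;
check your tree):

	if (success)
		return 0;

	if (flags & BLK_MQ_REQ_NOWAIT)
		return -EBUSY;

	smp_rmb();

	/*
	 * Sleep until the queue is unfrozen (mq_freeze_depth == 0) or
	 * dying. Note that pm (i.e. BLK_MQ_REQ_PREEMPT) again only
	 * skips the pm_only check; it does not terminate the wait
	 * while a freeze is in progress.
	 */
	wait_event(q->mq_freeze_wq,
		   (!q->mq_freeze_depth &&
		    (pm || (blk_pm_request_resume(q),
			    !blk_queue_pm_only(q)))) ||
		   blk_queue_dying(q));
	if (blk_queue_dying(q))
		return -ENODEV;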

In other words, setting the BLK_MQ_REQ_PREEMPT flag only makes a
difference if blk_queue_pm_only(q) == true; it does not bypass a queue
freeze. Freezing a request queue involves calling
percpu_ref_kill(&q->q_usage_counter), which makes all subsequent
percpu_ref_tryget_live() calls return false until the queue has been
unfrozen. Hence blk_queue_enter() ends up in the freeze wait whether
or not BLK_MQ_REQ_PREEMPT is set.

Bart.