On Fri, Aug 21, 2020 at 08:34:48AM +0200, Christoph Hellwig wrote:
> > -static void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
> > -	__releases(hctx->srcu)
> > +static void hctx_unlock(struct blk_mq_hw_ctx *hctx)
> >  {
> >  	if (!(hctx->flags & BLK_MQ_F_BLOCKING))
> >  		rcu_read_unlock();
> >  	else
> > -		srcu_read_unlock(hctx->srcu, srcu_idx);
> > +		percpu_ref_put(&hctx->queue->dispatch_counter);
>
> While you're at it: can we avoid the pointless inversion in the if
> statement and just do:
>
> 	if (hctx->flags & BLK_MQ_F_BLOCKING)
> 		percpu_ref_put(&hctx->queue->dispatch_counter);
> 	else
> 		rcu_read_unlock();

OK, will do that, but strictly speaking it doesn't belong in this patch.

> > +static inline bool hctx_lock(struct blk_mq_hw_ctx *hctx)
> >  {
> >  	if (!(hctx->flags & BLK_MQ_F_BLOCKING)) {
> > -		/* shut up gcc false positive */
> > -		*srcu_idx = 0;
> >  		rcu_read_lock();
> > +		return true;
> >  	} else
> > -		*srcu_idx = srcu_read_lock(hctx->srcu);
> > +		return percpu_ref_tryget_live(&hctx->queue->dispatch_counter);
> >  }
>
> Same here.
>
> Otherwise this looks good to me, but did you do a deep audit that all
> the new hctx_lock() failure cases don't cause problems?

This patch treats hctx_lock() failure as queue quiesce, since queue
quiesce is the only reason the tryget can fail:

1) called for running a hw queue:

- __blk_mq_run_hw_queue()
- blk_mq_run_hw_queue()

In both functions, blk_queue_quiesced() is checked right after
hctx_lock(), and we return immediately if the queue is quiesced.
Since hctx_lock() failure is treated as the queue being quiesced, the
handling is correct.

2) called in the direct issue path:

- blk_mq_try_issue_directly()
- blk_mq_request_issue_directly()

The current handling inserts the request into the scheduler queue when
the queue is quiesced, so this patch likewise inserts the request into
the scheduler queue when hctx_lock() fails.

Thanks,
Ming