Set nvme-loop's flush queue lock class via blk_mq_hctx_set_fq_lock_class()
to avoid lockdep's possible recursive locking report. This lets us remove
the dynamically allocated lock class for each flush queue, which in turn
avoids the horrible SCSI probe delay.

This approach does not cover the case in which one nvme-loop device is
backed by another nvme-loop device, but that setup is rarely used for
testing. Even if someone does stack nvme-loop devices that way, the result
is only a recursive-locking false positive, not a real deadlock.

Tested-by: Kashyap Desai <kashyap.desai@xxxxxxxxxxxx>
Reported-by: Qian Cai <cai@xxxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Cc: Sumit Saxena <sumit.saxena@xxxxxxxxxxxx>
Cc: John Garry <john.garry@xxxxxxxxxx>
Cc: Kashyap Desai <kashyap.desai@xxxxxxxxxxxx>
Cc: Bart Van Assche <bvanassche@xxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 drivers/nvme/target/loop.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index f6d81239be21..07806016c09d 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -211,6 +211,8 @@ static int nvme_loop_init_request(struct blk_mq_tag_set *set,
 			(set == &ctrl->tag_set) ? hctx_idx + 1 : 0);
 }
 
+static struct lock_class_key loop_hctx_fq_lock_key;
+
 static int nvme_loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
 		unsigned int hctx_idx)
 {
@@ -219,6 +221,14 @@ static int nvme_loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
 
 	BUG_ON(hctx_idx >= ctrl->ctrl.queue_count);
 
+	/*
+	 * flush_end_io() can be called recursively for us, so use our own
+	 * lock class key to avoid lockdep's possible recursive locking
+	 * report, which lets us drop the dynamically allocated lock class
+	 * for each flush queue that otherwise causes a horrible boot delay.
+	 */
+	blk_mq_hctx_set_fq_lock_class(hctx, &loop_hctx_fq_lock_key);
+
 	hctx->driver_data = queue;
 	return 0;
 }
-- 
2.28.0
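
For reviewers who have not looked at the new block-layer helper: the
expectation is that blk_mq_hctx_set_fq_lock_class() does nothing more than
re-class the lockdep key of the hw queue's flush-queue lock, so every
nvme-loop hw queue shares the single static loop_hctx_fq_lock_key above
instead of a dynamically allocated class per flush queue. A rough sketch of
that assumed behaviour (the real helper lives in the block layer, next to
the private struct blk_flush_queue definition, and is not part of this
patch):

/*
 * Sketch of the assumed behaviour only, not the actual block-layer
 * implementation; it would sit in block/blk-flush.c where struct
 * blk_flush_queue (and its mq_flush_lock) is visible.
 */
void blk_mq_hctx_set_fq_lock_class(struct blk_mq_hw_ctx *hctx,
		struct lock_class_key *key)
{
	/* re-assign the lockdep class of this hctx's flush-queue lock */
	lockdep_set_class(&hctx->fq->mq_flush_lock, key);
}

With that in place, nvme-loop's flush-queue locks sit in their own class,
distinct from the class used by the backing device's flush queue, so the
nested acquisition in flush_end_io() is no longer reported as recursive
locking (except in the loop-on-loop case noted above).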