Since the next patch in this series will use RCU to iterate over tag_list,
make this safe. Note: call_rcu() is already used to free the request
queue. From blk-sysfs.c:

	call_rcu(&q->rcu_head, blk_free_queue_rcu);

See also:
* Commit 705cda97ee3a ("blk-mq: Make it safe to use RCU to iterate over
  blk_mq_tag_set.tag_list"; v4.12).
* Commit 08c875cbf481 ("block: Use non _rcu version of list functions
  for tag_set_list"; v5.9).

Reviewed-by: Khazhismel Kumykov <khazhy@xxxxxxxxxx>
Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@xxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Cc: Shin'ichiro Kawasaki <shinichiro.kawasaki@xxxxxxx>
Cc: Ming Lei <ming.lei@xxxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxx>
Cc: Johannes Thumshirn <johannes.thumshirn@xxxxxxx>
Cc: John Garry <john.garry@xxxxxxxxxx>
Cc: Daniel Wagner <dwagner@xxxxxxx>
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
 block/blk-mq.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8b59f6b4ec8e..7d2ea6357c7d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2947,7 +2947,7 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
 	struct blk_mq_tag_set *set = q->tag_set;
 
 	mutex_lock(&set->tag_list_lock);
-	list_del(&q->tag_set_list);
+	list_del_rcu(&q->tag_set_list);
 	if (list_is_singular(&set->tag_list)) {
 		/* just transitioned to unshared */
 		set->flags &= ~BLK_MQ_F_TAG_QUEUE_SHARED;
@@ -2955,7 +2955,11 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
 		blk_mq_update_tag_set_shared(set, false);
 	}
 	mutex_unlock(&set->tag_list_lock);
-	INIT_LIST_HEAD(&q->tag_set_list);
+	/*
+	 * Calling synchronize_rcu() and INIT_LIST_HEAD(&q->tag_set_list) is
+	 * not necessary since blk_mq_del_queue_tag_set() is only called from
+	 * blk_cleanup_queue().
+	 */
 }
 
 static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,