This is a note to let you know that I've just added the patch titled

    block: Skip destroyed blkg when restart in blkg_destroy_all()

to the 6.2-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     block-skip-destroyed-blkg-when-restart-in-blkg_destr.patch
and it can be found in the queue-6.2 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.



commit eaeb212b688ff5e2e98acc4f5f6c5a9be7723230
Author: Tao Su <tao1.su@xxxxxxxxxxxxxxx>
Date:   Fri Apr 28 12:51:49 2023 +0800

    block: Skip destroyed blkg when restart in blkg_destroy_all()

    [ Upstream commit 8176080d59e6d4ff9fc97ae534063073b4f7a715 ]

    Kernel hang in blkg_destroy_all() when total blkg greater than
    BLKG_DESTROY_BATCH_SIZE, because of not removing destroyed blkg in
    blkg_list. So the size of blkg_list is same after destroying a
    batch of blkg, and the infinite 'restart' occurs.

    Since blkg should stay on the queue list until blkg_free_workfn(),
    skip destroyed blkg when restart a new round, which will solve this
    kernel hang issue and satisfy the previous will to restart.

    Reported-by: Xiangfei Ma <xiangfeix.ma@xxxxxxxxx>
    Tested-by: Xiangfei Ma <xiangfeix.ma@xxxxxxxxx>
    Tested-by: Farrah Chen <farrah.chen@xxxxxxxxx>
    Signed-off-by: Tao Su <tao1.su@xxxxxxxxxxxxxxx>
    Fixes: f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()")
    Suggested-and-reviewed-by: Yu Kuai <yukuai3@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20230428045149.1310073-1-tao1.su@xxxxxxxxxxxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 9ac1efb053e08..2d8a28e4e22f7 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -501,6 +501,9 @@ static void blkg_destroy_all(struct gendisk *disk)
 	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
 		struct blkcg *blkcg = blkg->blkcg;
 
+		if (hlist_unhashed(&blkg->blkcg_node))
+			continue;
+
 		spin_lock(&blkcg->lock);
 		blkg_destroy(blkg);
 		spin_unlock(&blkcg->lock);
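
For context, the hang comes from the restart path just outside the hunk above. The sketch below is an abridged reconstruction of how blkg_destroy_all() batches destruction, with the new check applied. The identifiers taken from the diff and commit message (q->blkg_list, blkg_destroy(), hlist_unhashed(), BLKG_DESTROY_BATCH_SIZE) are real; the surrounding restart and locking details are paraphrased rather than quoted from the 6.2 tree and may differ slightly.

/*
 * Abridged sketch, not verbatim 6.2 source. The walk drops queue_lock
 * after every BLKG_DESTROY_BATCH_SIZE destructions and restarts from the
 * head of q->blkg_list. Destroyed blkgs are only unlinked from that list
 * later, in blkg_free_workfn(), so without the hlist_unhashed() skip each
 * restart sees an undiminished list and the loop never terminates once
 * the batch size is exceeded.
 */
static void blkg_destroy_all(struct gendisk *disk)
{
	struct request_queue *q = disk->queue;
	struct blkcg_gq *blkg, *n;
	int count = BLKG_DESTROY_BATCH_SIZE;

restart:
	spin_lock_irq(&q->queue_lock);
	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
		struct blkcg *blkcg = blkg->blkcg;

		/* The fix: skip blkgs already destroyed in an earlier batch. */
		if (hlist_unhashed(&blkg->blkcg_node))
			continue;

		spin_lock(&blkcg->lock);
		blkg_destroy(blkg);
		spin_unlock(&blkcg->lock);

		/*
		 * Release the lock periodically so destroying a large number
		 * of blkgs does not pin queue_lock, then start the walk over.
		 */
		if (!(--count)) {
			count = BLKG_DESTROY_BATCH_SIZE;
			spin_unlock_irq(&q->queue_lock);
			cond_resched();
			goto restart;
		}
	}

	q->root_blkg = NULL;
	spin_unlock_irq(&q->queue_lock);
}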