This is a note to let you know that I've just added the patch titled

    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"

to the 5.10-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     blk-throttle-fix-lockdep-warning-of-cgroup_mutex-or-.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 18d5b4908b5c3c7af76892e09038d2b560e220eb
Author: Ming Lei <ming.lei@xxxxxxxxxx>
Date:   Fri Nov 17 10:35:22 2023 +0800

    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"

    [ Upstream commit 27b13e209ddca5979847a1b57890e0372c1edcee ]

    Inside blkg_for_each_descendant_pre(), both
    css_for_each_descendant_pre() and blkg_lookup() require the RCU read
    lock: each asserts this via cgroup_assert_mutex_or_rcu_locked() or
    rcu_read_lock_held().

    Fix the warning by taking the RCU read lock around the walk.

    Reported-by: Changhui Zhong <czhong@xxxxxxxxxx>
    Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20231117023527.3188627-2-ming.lei@xxxxxxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index c526fdd0a7b90..4bf514a7bd82c 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1409,6 +1409,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 		   tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE),
 		   tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE));
 
+	rcu_read_lock();
 	/*
 	 * Update has_rules[] flags for the updated tg's subtree.  A tg is
 	 * considered to have rules if either the tg itself or any of its
@@ -1436,6 +1437,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 		this_tg->latency_target = max(this_tg->latency_target,
 				parent_tg->latency_target);
 	}
+	rcu_read_unlock();
 
 	/*
 	 * We're already holding queue_lock and know @tg is valid.  Let's
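
For reference, here is a condensed sketch of tg_conf_updated() with this
patch applied. It is abbreviated from the hunks above and the surrounding
5.10 source, so treat anything outside the two added lines (the elided
bodies in particular) as an approximation rather than the verbatim code:

	static void tg_conf_updated(struct throtl_grp *tg, bool global)
	{
		struct cgroup_subsys_state *pos_css;
		struct blkcg_gq *blkg;

		/* ... log the updated bps/iops limits for @tg ... */

		rcu_read_lock();
		/*
		 * blkg_for_each_descendant_pre() is built on
		 * css_for_each_descendant_pre() and blkg_lookup(), both of
		 * which assert RCU protection, so the read lock must cover
		 * the whole pre-order walk, not just individual lookups.
		 */
		blkg_for_each_descendant_pre(blkg, pos_css,
				global ? tg->td->queue->root_blkg : tg_to_blkg(tg)) {
			struct throtl_grp *this_tg = blkg_to_tg(blkg);

			/*
			 * ... refresh has_rules[] and inherit idle/latency
			 * limits from the parent tg ...
			 */
		}
		rcu_read_unlock();

		/*
		 * ... queue_lock is already held here; update dispatch
		 * times and kick the pending timer ...
		 */
	}

Taking the lock once around the entire walk, rather than per iteration,
is the natural choice here: RCU read-side critical sections are cheap,
and the iterator relies on RCU to keep the css subtree it is traversing
from being freed underneath it.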