This is a note to let you know that I've just added the patch titled

    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"

to the 6.6-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     blk-throttle-fix-lockdep-warning-of-cgroup_mutex-or-.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 6211c6507d37ea69ee7f63814e8122b3867cbb11
Author: Ming Lei <ming.lei@xxxxxxxxxx>
Date:   Fri Nov 17 10:35:22 2023 +0800

    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"

    [ Upstream commit 27b13e209ddca5979847a1b57890e0372c1edcee ]

    Inside blkg_for_each_descendant_pre(), both
    css_for_each_descendant_pre() and blkg_lookup() require the RCU read
    lock, and either cgroup_assert_mutex_or_rcu_locked() or
    rcu_read_lock_held() is called.

    Fix the warning by adding an RCU read lock.

    Reported-by: Changhui Zhong <czhong@xxxxxxxxxx>
    Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20231117023527.3188627-2-ming.lei@xxxxxxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 13e4377a8b286..16f5766620a41 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1320,6 +1320,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 		   tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE),
 		   tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE));
 
+	rcu_read_lock();
 	/*
 	 * Update has_rules[] flags for the updated tg's subtree.  A tg is
 	 * considered to have rules if either the tg itself or any of its
@@ -1347,6 +1348,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 			this_tg->latency_target = max(this_tg->latency_target,
 					parent_tg->latency_target);
 	}
+	rcu_read_unlock();
 
 	/*
 	 * We're already holding queue_lock and know @tg is valid.  Let's