This is a note to let you know that I've just added the patch titled

    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"

to the 5.15-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     blk-throttle-fix-lockdep-warning-of-cgroup_mutex-or-.patch
and it can be found in the queue-5.15 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


commit ee869353f31f7a2d4369586fe48c699d369b177d
Author: Ming Lei <ming.lei@xxxxxxxxxx>
Date:   Fri Nov 17 10:35:22 2023 +0800

    blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"

    [ Upstream commit 27b13e209ddca5979847a1b57890e0372c1edcee ]

    Inside blkg_for_each_descendant_pre(), both css_for_each_descendant_pre()
    and blkg_lookup() require the RCU read lock, and either
    cgroup_assert_mutex_or_rcu_locked() or rcu_read_lock_held() is called.

    Fix the warning by adding an RCU read lock around the traversal.

    Reported-by: Changhui Zhong <czhong@xxxxxxxxxx>
    Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20231117023527.3188627-2-ming.lei@xxxxxxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 68cf8dbb4c67a..4da4b25b12f48 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1415,6 +1415,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 		   tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE),
 		   tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE));
 
+	rcu_read_lock();
 	/*
 	 * Update has_rules[] flags for the updated tg's subtree.  A tg is
 	 * considered to have rules if either the tg itself or any of its
@@ -1442,6 +1443,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 		this_tg->latency_target = max(this_tg->latency_target,
 				parent_tg->latency_target);
 	}
+	rcu_read_unlock();
 
 	/*
 	 * We're already holding queue_lock and know @tg is valid.  Let's
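
For context, here is a minimal userspace sketch of the locking pattern the
patch applies, using liburcu (the userspace RCU library) in place of the
kernel's RCU API. The node structure, the root pointer, and the
walk_descendants() helper are hypothetical stand-ins for the blkcg
hierarchy walk; only the rcu_read_lock()/rcu_read_unlock() bracketing of
the descendant traversal mirrors the actual fix. Build with:
gcc demo.c -lurcu

#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>		/* userspace RCU: rcu_read_lock() etc. */

struct node {
	int value;
	struct node *child;	/* RCU-protected link */
};

static struct node *root;	/* hypothetical RCU-protected hierarchy */

/*
 * Reader side: any walk that follows RCU-protected links must run under
 * the RCU read lock, just as blkg_for_each_descendant_pre() must in
 * tg_conf_updated().
 */
static void walk_descendants(void)
{
	struct node *n;

	rcu_read_lock();	/* the bracketing the patch adds */
	for (n = rcu_dereference(root); n; n = rcu_dereference(n->child))
		printf("visiting node %d\n", n->value);
	rcu_read_unlock();
}

int main(void)
{
	struct node *a = calloc(1, sizeof(*a));
	struct node *b = calloc(1, sizeof(*b));

	a->value = 1;
	b->value = 2;
	a->child = b;

	rcu_register_thread();		/* each thread using RCU registers */
	rcu_assign_pointer(root, a);	/* publish the hierarchy */

	walk_descendants();

	rcu_unregister_thread();
	free(b);
	free(a);
	return 0;
}

The design point is the same as in the kernel patch: the traversal itself
is cheap, so taking the read lock around the whole subtree walk is simpler
and safer than trying to prove every lookup inside it is otherwise
protected.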