On a !PREEMPT kernel, we can hit the softlockup below when stress testing
with repeatedly creating and destroying block cgroups. The reason is that
it may take a long time to acquire the queue's lock in the loop of
blkcg_destroy_blkgs(), or the system can accumulate a huge number of blkgs
in pathological cases. To avoid this, add a need_resched() check on each
iteration of the loop and, if it returns true, release the lock and call
cond_resched(); this is safe since blkcg_destroy_blkgs() is not called
from atomic contexts.

[ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
[ 4757.010698] Call trace:
[ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
[ 4757.010701]  cgwb_release_workfn+0x104/0x158
[ 4757.010702]  process_one_work+0x1bc/0x3f0
[ 4757.010704]  worker_thread+0x164/0x468
[ 4757.010705]  kthread+0x108/0x138

Suggested-by: Tejun Heo <tj@xxxxxxxxxx>
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
Changes from v1:
 - Add might_sleep() in blkcg_destroy_blkgs().
 - Add an explicit need_resched() check before releasing the lock.
 - Add some comments.
---
 block/blk-cgroup.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3465d6e..94eeed7 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
  */
 void blkcg_destroy_blkgs(struct blkcg *blkcg)
 {
+	might_sleep();
+
 	spin_lock_irq(&blkcg->lock);
 
 	while (!hlist_empty(&blkcg->blkg_list)) {
@@ -1031,6 +1033,17 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
 			cpu_relax();
 			spin_lock_irq(&blkcg->lock);
 		}
+
+		/*
+		 * Given that the system can accumulate a huge number
+		 * of blkgs in pathological cases, check to see if we
+		 * need to reschedule to avoid a softlockup.
+		 */
+		if (need_resched()) {
+			spin_unlock_irq(&blkcg->lock);
+			cond_resched();
+			spin_lock_irq(&blkcg->lock);
+		}
 	}
 
 	spin_unlock_irq(&blkcg->lock);
-- 
1.8.3.1
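
As an aside, here is a minimal sketch (not the actual blk-cgroup code) of the
drop-lock/cond_resched() pattern the second hunk applies. struct item_cache,
struct item and release_all_items() are made-up names for illustration; the
locking, list and scheduling calls are the same kernel APIs the patch uses.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical objects guarded by a spinlock; stand-ins for the blkgs. */
struct item {
	struct list_head node;
};

struct item_cache {
	spinlock_t lock;
	struct list_head items;
};

static void release_all_items(struct item_cache *cache)
{
	/* cond_resched() may sleep, so complain if called atomically. */
	might_sleep();

	spin_lock_irq(&cache->lock);
	while (!list_empty(&cache->items)) {
		struct item *it = list_first_entry(&cache->items,
						   struct item, node);

		list_del(&it->node);
		kfree(it);

		/*
		 * A long list can pin the CPU past the softlockup
		 * threshold on !PREEMPT, so drop the lock and give the
		 * scheduler a chance whenever it asks for one.
		 */
		if (need_resched()) {
			spin_unlock_irq(&cache->lock);
			cond_resched();
			spin_lock_irq(&cache->lock);
		}
	}
	spin_unlock_irq(&cache->lock);
}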