On Mon, Apr 04, 2022 at 07:37:24AM -1000, Tejun Heo <tj@xxxxxxxxxx> wrote:
> And the suggested behavior doesn't make much sense to me. It doesn't
> actually solve the underlying problem but instead always make css
> destructions recursive which can lead to surprises for normal use cases.

I also don't like the nested special-case use of percpu_ref_kill().

I looked at this and my proposed solution turned out to be a revert of
commit 3c606d35fe97 ("cgroup: prevent mount hang due to memory controller
lifetime"). So at unmount time it is necessary to distinguish children
that are in the process of removal from children that are online or
pinned indefinitely. What about:

--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2205,11 +2205,14 @@ static void cgroup_kill_sb(struct super_block *sb)
 	struct cgroup_root *root = cgroup_root_from_kf(kf_root);
 
 	/*
-	 * If @root doesn't have any children, start killing it.
+	 * If @root doesn't have any children held by residual state (e.g.
+	 * memory controller), start killing it, flush workqueue to filter out
+	 * transiently offlined children.
 	 * This prevents new mounts by disabling percpu_ref_tryget_live().
 	 *
 	 * And don't kill the default root.
 	 */
+	flush_workqueue(cgroup_destroy_wq);
 	if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
 	    !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
 		cgroup_bpf_offline(&root->cgrp);

(I suspect a race between a concurrent unmount and the last rmdir is
technically still possible, but the flush on the kill_sb path should be
affordable, and it prevents cgroup roots from being unnecessarily
preserved.)

Michal