Hello,

The patch aims to rebuild the sched domains if the cpus of an isolated
partition are updated.

Another issue might have been found, but it seems to require more
complex modifications. To reproduce it:

# mkdir cgroup
# mount -t cgroup2 none cgroup/
# mkdir cgroup/A1 cgroup/B1
# echo "+cpuset" > cgroup/cgroup.subtree_control
# echo 0-3 > cgroup/A1/cpuset.cpus
# echo isolated > cgroup/A1/cpuset.cpus.partition
# echo 4-6 > cgroup/B1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated

// Make the isolated partition invalid, as its cpus list is no longer
// exclusive:
# echo 0-4 > A1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated invalid (Cpu list in cpuset.cpus not exclusive)

// Expected result, internal state of the cgroup:
// - prs_err: PERR_NOTEXCL
// - flags: CS_CPU_EXCLUSIVE | CS_MEMORY_MIGRATE | CS_SCHED_LOAD_BALANCE

// Make the isolated partition valid again:
# echo 0-3 > A1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated invalid (Cpu list in cpuset.cpus not exclusive)

// Unexpected result, internal state of the cgroup:
// - prs_err: PERR_NOTEXCL
// - flags: CS_CPU_EXCLUSIVE | CS_MEMORY_MIGRATE | CS_SCHED_LOAD_BALANCE

The issue seems to be that in update_cpumask(), the cgroup tree is only
traversed when the partition needs to be invalidated. Given the case
above, I think it should also be traversed when an invalid partition
might become valid again.

Regards,
Pierre

Pierre Gondois (1):
  cgroup/cpuset: Rebuild sched domains if isolated partition changed

 kernel/cgroup/cpuset.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--
2.25.1
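
P.S. To illustrate the condition change suggested above, here is a minimal,
self-contained sketch. It is not the kernel code and not part of this patch;
all type and helper names below are made up. It only models when
update_cpumask() arguably needs to walk the subtree and refresh the
partition state:

/*
 * Illustrative model only -- not kernel code and not part of this patch.
 * All names are hypothetical; it just captures the decision described
 * above: the subtree should be walked not only when the new cpus list
 * invalidates the partition, but also when it could make a currently
 * invalid partition valid again.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_cpuset {
	bool partition_requested;	/* "root"/"isolated" was requested */
	bool partition_valid;		/* partition is currently valid */
	bool new_cpus_exclusive;	/* new cpus list would be exclusive */
};

static bool should_walk_subtree(const struct toy_cpuset *cs)
{
	bool invalidate = cs->partition_valid && !cs->new_cpus_exclusive;
	bool revalidate = cs->partition_requested && !cs->partition_valid &&
			  cs->new_cpus_exclusive;

	/* Reportedly only the 'invalidate' case triggers the walk today. */
	return invalidate || revalidate;
}

int main(void)
{
	/* A1 after "echo 0-3 > A1/cpuset.cpus": invalid, but now exclusive. */
	struct toy_cpuset a1 = {
		.partition_requested = true,
		.partition_valid = false,
		.new_cpus_exclusive = true,
	};

	printf("walk subtree: %s\n", should_walk_subtree(&a1) ? "yes" : "no");
	return 0;
}

Compiled and run, this prints "walk subtree: yes" for the re-validation
case reproduced above.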