Hello.

On Mon, Feb 28, 2022 at 06:24:07PM +0100, Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> [...]
> cpuset_attach()                              cpu hotplug
> ---------------------------                  ----------------------
> down_write(cpuset_rwsem)
>  guarantee_online_cpus() // (load cpus_attach)
>                                              sched_cpu_deactivate
>                                                set_cpu_active()
>                                                // will change cpu_active_mask
>  set_cpus_allowed_ptr(cpus_attach)
>   __set_cpus_allowed_ptr_locked()
>    // (if the intersection of cpus_attach and
>    //  cpu_active_mask is empty, will return -EINVAL)
> up_write(cpuset_rwsem)
> [...]
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1528,6 +1528,7 @@ static void cpuset_attach(struct cgroup_
>  	cgroup_taskset_first(tset, &css);
>  	cs = css_cs(css);
>  
> +	cpus_read_lock();
>  	mutex_lock(&cpuset_mutex);

This backport (and possibly older kernels) looks suspicious since it
comes before commit d74b27d63a8b ("cgroup/cpuset: Change cpuset_rwsem
and hotplug lock order") v5.4-rc1~176^2~30, when the locking order was
still: cpuset lock first, then cpus lock.

At the same time it also comes before commit 710da3c8ea7d ("sched/core:
Prevent race condition between cpuset and __sched_setscheduler()")
v5.4-rc1~176^2~27, when __sched_setscheduler() did not care about this
either, and that race is similar.

(The swapped locking may still conflict with rebuild_sched_domains()
before d74b27d63a8b.)

Michal
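
P.S. To spell the ordering concern out in rough code form -- this is only
a sketch of the two paths as I read the pre-d74b27d63a8b code, not a
literal copy:

  /* rebuild_sched_domains(): cpuset lock first, then cpus lock */
  mutex_lock(&cpuset_mutex);
          get_online_cpus();      /* i.e. cpus_read_lock() on these kernels */
          /* ... rebuild_sched_domains_locked() body ... */
          put_online_cpus();
  mutex_unlock(&cpuset_mutex);

  /* cpuset_attach() with the backported hunk: cpus lock first, then cpuset lock */
  cpus_read_lock();
          mutex_lock(&cpuset_mutex);
          /* ... */
          mutex_unlock(&cpuset_mutex);
  cpus_read_unlock();

That is the same pair of locks taken in opposite order, which is the kind
of inversion lockdep would normally flag.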