On 04/08/2019 11:47 AM, Phil Auld wrote:
> On Mon, Apr 08, 2019 at 11:39:36AM -0400 Waiman Long wrote:
>> On 04/08/2019 11:14 AM, Tejun Heo wrote:
>>> Hello,
>>>
>>> (cc'ing Waiman and copying the whole message for him)
>>>
>>> On Fri, Apr 05, 2019 at 11:36:59AM -0400, Joel Savitz wrote:
>>>> If a process is limited by taskset (i.e. cpuset) to only be allowed to
>>>> run on cpu N, and then cpu N is offlined via hotplug, the process will
>>>> be assigned the current value of its cpuset cgroup's effective_cpus field
>>>> in a call to do_set_cpus_allowed() in cpuset_cpus_allowed_fallback().
>>>> This argument's value does not make sense for this case, because
>>>> task_cs(tsk)->effective_cpus is modified by cpuset_hotplug_workfn()
>>>> to reflect the new value of cpu_active_mask after cpu N is removed from
>>>> the mask. While this may make sense for the cgroup affinity mask, it
>>>> does not make sense on a per-task basis, as a task that was previously
>>>> limited to only be run on cpu N will be limited to every cpu _except_ for
>>>> cpu N after it is offlined/onlined via hotplug.
>>>>
>>>> Pre-patch behavior:
>>>>
>>>> $ grep Cpus /proc/$$/status
>>>> Cpus_allowed: ff
>>>> Cpus_allowed_list: 0-7
>>>>
>>>> $ taskset -p 4 $$
>>>> pid 19202's current affinity mask: f
>>>> pid 19202's new affinity mask: 4
>>>>
>>>> $ grep Cpus /proc/self/status
>>>> Cpus_allowed: 04
>>>> Cpus_allowed_list: 2
>>>>
>>>> # echo off > /sys/devices/system/cpu/cpu2/online
>>>> $ grep Cpus /proc/$$/status
>>>> Cpus_allowed: 0b
>>>> Cpus_allowed_list: 0-1,3
>>>>
>>>> # echo on > /sys/devices/system/cpu/cpu2/online
>>>> $ grep Cpus /proc/$$/status
>>>> Cpus_allowed: 0b
>>>> Cpus_allowed_list: 0-1,3
>>>>
>>>> On a patched system, the final grep produces the following
>>>> output instead:
>>>>
>>>> $ grep Cpus /proc/$$/status
>>>> Cpus_allowed: ff
>>>> Cpus_allowed_list: 0-7
>>>>
>>>> This patch changes the above behavior by instead simply resetting the mask
>>>> to cpu_possible_mask.
>>>>
>>>> Signed-off-by: Joel Savitz <jsavitz@xxxxxxxxxx>
>>>> ---
>>>>  kernel/cgroup/cpuset.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index 479743db6c37..5f65a2167bdf 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -3243,7 +3243,7 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
>>>>  void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
>>>>  {
>>>>  	rcu_read_lock();
>>>> -	do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
>>>> +	do_set_cpus_allowed(tsk, cpu_possible_mask);
>>>>  	rcu_read_unlock();
>>> cpuset directly mangling with per-task masks has always been weird and
>>> somewhat broken. Given the current cpuset behavior, I suppose this is
>>> the better behavior. Waiman, what do you think?
>>>
>>> Thanks.
>>>
>> I think it may be better to use cpus_allowed in the case of fallback to
>> make sure that the task isn't allowed to run on CPUs it is not supposed
>> to run on, e.g. in a VM or container under cpuset control. For tasks in
>> the top cpuset, it is the same as cpu_possible_mask. Of course, we are
>> assuming that cpus_allowed has some sane value. BTW, there should be
>> some comments about handling this case of cpu offlining.
>>
> This is setting cpus_allowed, so we can't use that here. This is the final
> fallback. We've already tried the parent cpuset bits at this point and
> found nothing. If the parent had a mask that included a CPU that was still
> present, we would have already used that. I believe Joel's testing included
> using a cpuset hierarchy and it did the right thing.
>
> I don't know if he still has those notes or not.
>
>
> Cheers,
> Phil

I am referring to the "cpus_allowed" in the current cpuset, not the
cpus_allowed in the task itself. We can add one more fallback within
cpuset_cpus_allowed_fallback(): if the current task's cpus_allowed is the
same as the cpuset's cpus_allowed, we fall back to cpu_possible_mask.

-Longman
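
For illustration only, a rough sketch of the extra fallback Longman describes
might look something like the following inside kernel/cgroup/cpuset.c. This is
not the patch under discussion; it assumes the cpuset internals of that era,
where task_cs(tsk)->cpus_allowed is the cpuset's configured mask and
tsk->cpus_allowed is the task's own affinity mask:

void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
{
	const struct cpumask *cs_mask;

	rcu_read_lock();
	cs_mask = task_cs(tsk)->cpus_allowed;

	/*
	 * Prefer the cpuset's configured cpus_allowed.  If the task's own
	 * mask is already identical to it, restricting to the cpuset buys
	 * nothing, so fall back one level further to cpu_possible_mask.
	 */
	if (cpumask_equal(&tsk->cpus_allowed, cs_mask))
		do_set_cpus_allowed(tsk, cpu_possible_mask);
	else
		do_set_cpus_allowed(tsk, cs_mask);
	rcu_read_unlock();
}

Whether the comparison should be strict equality or something like a subset
test is exactly the kind of detail the thread leaves open.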