On 8/16/22 13:20, Tejun Heo wrote:
Hello,
So, overall I think this is the right direction.
+static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
+				       const struct cpumask *mask)
+{
+	if (p->user_cpus_ptr) {
+		cpumask_var_t new_mask;
+
+		if (alloc_cpumask_var(&new_mask, GFP_KERNEL) &&
+		    copy_user_cpus_mask(p, new_mask) &&
+		    cpumask_and(new_mask, new_mask, mask)) {
+			int ret = set_cpus_allowed_ptr(p, new_mask);
+
+			free_cpumask_var(new_mask);
+			return ret;
+		}
+		free_cpumask_var(new_mask);
+	}
+
+	return set_cpus_allowed_ptr(p, mask);
+}
But this seems racy to me. Let's say attach and setaffinity race. The
expectation should be that we'd end up with the same eventual mask no matter
what the operation order may be. The above code wouldn't do that, right?
There's nothing synchronizing the two, and if setaffinity takes place between
the user_cpus_ptr test and set_cpus_allowed_ptr(), it'd get ignored.
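Concretely, the window looks something like this (an illustrative
interleaving, not taken from the patch):

	/*
	 * cpuset attach (CPU 0)             sched_setaffinity() (CPU 1)
	 * ---------------------             ---------------------------
	 * if (p->user_cpus_ptr)  // NULL,
	 *	...               // branch skipped
	 *                                   installs p->user_cpus_ptr
	 *                                   with the user's mask
	 * set_cpus_allowed_ptr(p, mask);
	 *	// plain cpuset mask applied; the user mask
	 *	// set on CPU 1 is never intersected
	 */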
Yes, a race like this is possible. Completely eliminating it may
require taking task_rq_lock() and then calling
__set_cpus_allowed_ptr_locked(), which is internal to kernel/sched/core.c.
Alternatively, we can check user_cpus_ptr again after the second
set_cpus_allowed_ptr() and retry with the other path if it is set. That
will probably address your concern. Please let me know if you are OK
with that.
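For illustration only, the recheck might look roughly like this.
set_cpus_allowed_user() is a hypothetical stand-in for the
user_cpus_ptr branch of cpuset_set_cpus_allowed_ptr() above, and this
narrows the window rather than closing it completely:

static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
				       const struct cpumask *mask)
{
	int ret;

	/* A user mask already exists: apply the intersection. */
	if (p->user_cpus_ptr)
		return set_cpus_allowed_user(p, mask);

	ret = set_cpus_allowed_ptr(p, mask);

	/*
	 * A concurrent sched_setaffinity() may have installed
	 * p->user_cpus_ptr after the check above; if so, redo the
	 * update along the user-mask path.
	 */
	if (p->user_cpus_ptr)
		ret = set_cpus_allowed_user(p, mask);

	return ret;
}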
Cheers,
Longman
This gotta be more integrated. There is what the user requested, and there
are restrictions from CPU hotplug state and cpuset. All three should be
synchronized so that there is one way to obtain and apply the current
effective mask.
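Conceptually, such a single path might compute something like the
following. This is a sketch only: cpuset_allowed_mask() is a
hypothetical stand-in for however the task's cpuset mask is obtained,
and the synchronization (e.g. task_rq_lock()) is elided:

static void task_effective_cpus(struct task_struct *p, struct cpumask *eff)
{
	/* 1. what the user requested (default: any possible CPU) */
	cpumask_copy(eff, p->user_cpus_ptr ? p->user_cpus_ptr
					   : cpu_possible_mask);
	/* 2. restricted by the task's cpuset */
	cpumask_and(eff, eff, cpuset_allowed_mask(p));
	/* 3. restricted by CPU hotplug state */
	cpumask_and(eff, eff, cpu_active_mask);
}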
Thanks.