On 12/01/20 16:56, Will Deacon wrote:
> On Fri, Nov 27, 2020 at 01:19:16PM +0000, Qais Yousef wrote:
> > On 11/24/20 15:50, Will Deacon wrote:
> >
> > [...]
> >
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index d2003a7d5ab5..818c8f7bdf2a 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -1860,24 +1860,18 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
> > >  }
> > >
> > >  /*
> > > - * Change a given task's CPU affinity. Migrate the thread to a
> > > - * proper CPU and schedule it away if the CPU it's executing on
> > > - * is removed from the allowed bitmask.
> > > - *
> > > - * NOTE: the caller must have a valid reference to the task, the
> > > - * task must not exit() & deallocate itself prematurely. The
> > > - * call is not atomic; no spinlocks may be held.
> > > + * Called with both p->pi_lock and rq->lock held; drops both before returning.
> >
> > nit: wouldn't it be better for the caller to acquire and release the locks?
> > Not a big deal, but it's always confusing when half of the work is done
> > outside the function and the other half is done inside.
>
> That came up in the last version of the patches iirc, but the problem is
> that __set_cpus_allowed_ptr_locked() can trigger migration, which can
> drop the lock and take another one for the new runqueue.
>
> Given that this function is internal to the scheduler, I think we can
> probably live with it.

I guess task_rq_lock() always entails being prepared for surprises! Works
for me.

Thanks

--
Qais Yousef
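
[For context, a rough sketch of the calling convention being discussed: the
caller pins the task with task_rq_lock() and then hands both p->pi_lock and
rq->lock to the locked helper, which releases them before returning (and may
drop and retake rq->lock internally if it migrates the task). The helper's
exact signature and body here are an illustrative assumption, not the patch
itself.]

static int __set_cpus_allowed_ptr(struct task_struct *p,
				  const struct cpumask *new_mask, u32 flags)
{
	struct rq_flags rf;
	struct rq *rq;

	/* Caller side: take p->pi_lock and the task's rq->lock together. */
	rq = task_rq_lock(p, &rf);

	/*
	 * Locked helper (illustrative): entered with both locks held.  If p
	 * must move off its current CPU it may drop rq->lock and acquire the
	 * destination runqueue's lock, and it releases whatever locks it ends
	 * up holding before returning -- hence no task_rq_unlock() here.
	 */
	return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, &rf);
}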