On 09/10/19 01:25, Scott Wood wrote:
> On Tue, 2019-10-01 at 10:52 +0200, Juri Lelli wrote:
> > On 30/09/19 11:24, Scott Wood wrote:
> > > On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
> > > 
> > > > [...]
> > > > 
> > > > Hummm, I was actually more worried about the fact that we call
> > > > free_old_cpuset_bw_dl() only if p->state != TASK_WAKING.
> > > 
> > > Oh, right. :-P Not sure what I had in mind there; we want to call it
> > > regardless.
> > > 
> > > I assume we need rq->lock in free_old_cpuset_bw_dl()? So something like
> > 
> > I think we can do with rcu_read_lock_sched() (see dl_task_can_attach()).
> 
> RCU will keep dl_bw from being freed under us (we're implicitly in an RCU
> sched read section due to atomic context). It won't stop rq->rd from
> changing, but that could have happened before we took rq->lock. If the cpu
> the task was running on was removed from the cpuset, and that raced with the
> task being moved to a different cpuset, couldn't we end up erroneously
> subtracting from the cpu's new root domain (or failing to subtract at all if
> the old cpu's new cpuset happens to be the task's new cpuset)? I don't see
> anything that forces tasks off of the cpu when a cpu is removed from a
> cpuset (though maybe I'm not looking in the right place), so the race window
> could be quite large. In any case, that's an existing problem that's not
> going to get solved in this patchset.

OK. So, mainline has got cpuset_read_lock() which should be enough to
guard against changes to rd(s). I agree that rq->lock is needed here.

Thanks,

Juri