On Mon, Jul 13, 2020 at 03:27:55PM +0100, Qais Yousef wrote:
> On 07/13/20 15:35, Peter Zijlstra wrote:
> > > I protect this with rcu_read_lock() which as far as I know synchronize_rcu()
> > > will ensure if we do the update during this section; we'll wait for it to
> > > finish. New forkees entering the rcu_read_lock() section will be okay because
> > > they should see the new value.
> > >
> > > spinlocks() and mutexes seemed inferior to this approach.
> >
> > Well, didn't we just write in another patch that p->uclamp_* was
> > protected by both rq->lock and p->pi_lock?
>
> __setscheduler_uclamp() path is holding these locks, not sure by design or it
> just happened this path holds the lock. I can't see the lock in the
> uclamp_fork() path. But it's hard sometimes to unfold the layers of callers,
> especially not all call sites are annotated for which lock is assumed to be
> held.
>
> Is it safe to hold the locks in uclamp_fork() while the task is still being
> created? My new code doesn't hold it of course.
>
> We can enforce this rule if you like. Though rcu critical section seems lighter
> weight to me.
>
> If all of this does indeed start looking messy we can put the update in
> a delayed worker and schedule that instead of doing synchronous setup.

sched_fork() doesn't need the locks, because at that point the task isn't
visible yet. HOWEVER, sched_post_fork() is after pid-hash (per design) and
thus the task is visible, so we can race against sched_setattr(), and we'd
better hold those locks anyway.
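
Roughly something like the sketch below (not the actual patch; uclamp_fork()
here just stands in for whatever helper ends up applying the default clamps
at this point):

void sched_post_fork(struct task_struct *p)
{
	struct rq_flags rf;
	struct rq *rq;

	/*
	 * The task is already hashed here, so sched_setattr() can find
	 * it. task_rq_lock() takes both p->pi_lock and rq->lock, which
	 * serializes this update against __setscheduler_uclamp().
	 */
	rq = task_rq_lock(p, &rf);
	uclamp_fork(p);		/* stand-in for the uclamp default update */
	task_rq_unlock(rq, p, &rf);
}

That keeps the p->uclamp_* locking rules uniform instead of relying on an
RCU read-side section for just this one path.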