On Thu, Jan 17, 2019 at 09:47:38AM +0100, Juri Lelli wrote:
> No synchronisation mechanism exists between the cpuset subsystem and calls
> to function __sched_setscheduler(). As such, it is possible that new root
> domains are created on the cpuset side while a deadline acceptance test
> is carried out in __sched_setscheduler(), leading to a potential oversell
> of CPU bandwidth.
>
> Grab callback_lock from core scheduler, so to prevent situations such as
> the one described above from happening.
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5263383170e..d928a42b8852 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4224,6 +4224,13 @@ static int __sched_setscheduler(struct task_struct *p,
>  	rq = task_rq_lock(p, &rf);
>  	update_rq_clock(rq);
>  
> +	/*
> +	 * Make sure we don't race with the cpuset subsystem where root
> +	 * domains can be rebuilt or modified while operations like DL
> +	 * admission checks are carried out.
> +	 */
> +	cpuset_read_only_lock();
> +
>  	/*
>  	 * Changing the policy of the stop threads its a very bad idea:
>  	 */
> @@ -4285,6 +4292,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  	/* Re-check policy now with rq lock held: */
>  	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
>  		policy = oldpolicy = -1;
> +		cpuset_read_only_unlock();
>  		task_rq_unlock(rq, p, &rf);
>  		goto recheck;
>  	}
> @@ -4342,6 +4350,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  
>  	/* Avoid rq from going away on us: */
>  	preempt_disable();
> +	cpuset_read_only_unlock();
>  	task_rq_unlock(rq, p, &rf);
>  
>  	if (pi)
> @@ -4354,6 +4363,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  	return 0;
>  
>  unlock:
> +	cpuset_read_only_unlock();
>  	task_rq_unlock(rq, p, &rf);
>  	return retval;
>  }

Why take callback_lock inside rq->lock and not the other way around?
AFAICT there is no pre-existing order so we can pick one here.
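
For reference, the helpers themselves aren't in the quoted hunks, so the
following is a minimal sketch guessed from the changelog, not the actual
patch: I'm assuming cpuset_read_only_lock()/unlock() are thin wrappers
around cpuset's existing callback_lock, which would have to become a
raw_spinlock_t to be allowed to nest inside rq->lock (itself raw):

/* kernel/cgroup/cpuset.c -- hypothetical sketch, not the series itself */

/* Raw so it can be taken under rq->lock without sleeping-lock issues. */
static DEFINE_RAW_SPINLOCK(callback_lock);

void cpuset_read_only_lock(void)
{
	raw_spin_lock(&callback_lock);
}

void cpuset_read_only_unlock(void)
{
	raw_spin_unlock(&callback_lock);
}

Presumably the cpuset write side takes the same lock around root-domain
rebuilds, so the DL admission check above can't observe a half-rebuilt
root domain.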