Hi Scott,

On 27/07/19 00:56, Scott Wood wrote:
> With the changes to migrate disabling, ->set_cpus_allowed() no longer
> gets deferred until migrate_enable(). To avoid releasing the bandwidth
> while the task may still be executing on the old CPU, move the subtraction
> to ->migrate_task_rq().
> 
> Signed-off-by: Scott Wood <swood@xxxxxxxxxx>
> ---
>  kernel/sched/deadline.c | 67 +++++++++++++++++++++++--------------------------
>  1 file changed, 31 insertions(+), 36 deletions(-)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index c18be51f7608..2f18d0cf1b56 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1606,14 +1606,42 @@ static void yield_task_dl(struct rq *rq)
>  	return cpu;
>  }
>  
> +static void free_old_cpuset_bw_dl(struct rq *rq, struct task_struct *p)
> +{
> +	struct root_domain *src_rd = rq->rd;
> +
> +	/*
> +	 * Migrating a SCHED_DEADLINE task between exclusive
> +	 * cpusets (different root_domains) entails a bandwidth
> +	 * update. We already made space for us in the destination
> +	 * domain (see cpuset_can_attach()).
> +	 */
> +	if (!cpumask_intersects(src_rd->span, p->cpus_ptr)) {
> +		struct dl_bw *src_dl_b;
> +
> +		src_dl_b = dl_bw_of(cpu_of(rq));
> +		/*
> +		 * We now free resources of the root_domain we are migrating
> +		 * off. In the worst case, sched_setattr() may temporarily
> +		 * fail until we complete the update.
> +		 */
> +		raw_spin_lock(&src_dl_b->lock);
> +		__dl_sub(src_dl_b, p->dl.dl_bw, dl_bw_cpus(task_cpu(p)));
> +		raw_spin_unlock(&src_dl_b->lock);
> +	}
> +}
> +
>  static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused)
>  {
>  	struct rq *rq;
>  
> -	if (p->state != TASK_WAKING)
> +	rq = task_rq(p);
> +
> +	if (p->state != TASK_WAKING) {
> +		free_old_cpuset_bw_dl(rq, p);

What happens if a DEADLINE task is moved between cpusets while it is
sleeping? Don't we miss removing its bandwidth from the old cpuset's
root_domain if the task then gets migrated on wakeup, since that path
runs with p->state == TASK_WAKING?

Thanks,

Juri
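
P.S.: below is a standalone userspace toy of the case I have in mind
(all toy_* names are made up for illustration; only the TASK_WAKING
check mirrors the hunk above). Running it leaves the old domain's
bandwidth at its pre-move value, because the wakeup path never calls
the equivalent of free_old_cpuset_bw_dl():

/* toy_dl_bw.c: model of DL bandwidth accounting across a cpuset move */
#include <stdio.h>

enum task_state { TASK_RUNNING, TASK_SLEEPING, TASK_WAKING };

struct toy_dl_bw { long total_bw; };	/* stands in for struct dl_bw */

struct toy_task {
	enum task_state state;
	long dl_bw;			/* task's reserved bandwidth */
	struct toy_dl_bw *rd;		/* current "root domain" */
};

/* Like cpuset_can_attach(): make space in the destination domain. */
static void toy_cpuset_attach(struct toy_task *p, struct toy_dl_bw *dst)
{
	dst->total_bw += p->dl_bw;
	/* p->rd still points at the old domain until the task migrates. */
}

/* Skeleton of the patched migrate_task_rq_dl(). */
static void toy_migrate_task_rq(struct toy_task *p, struct toy_dl_bw *dst)
{
	if (p->state != TASK_WAKING) {
		/* free_old_cpuset_bw_dl(): give the old domain back. */
		p->rd->total_bw -= p->dl_bw;
	}
	/*
	 * On the wakeup path nothing is subtracted, so a cpuset move
	 * done while the task slept leaks bandwidth in the old domain.
	 */
	p->rd = dst;
}

int main(void)
{
	struct toy_dl_bw old_rd = { .total_bw = 50 };
	struct toy_dl_bw new_rd = { .total_bw = 0 };
	struct toy_task p = {
		.state = TASK_SLEEPING, .dl_bw = 50, .rd = &old_rd,
	};

	toy_cpuset_attach(&p, &new_rd);		/* moved while sleeping */
	p.state = TASK_WAKING;			/* task wakes up... */
	toy_migrate_task_rq(&p, &new_rd);	/* ...migrated on wakeup */

	printf("old domain bw: %ld (should be 0)\n", old_rd.total_bw);
	printf("new domain bw: %ld\n", new_rd.total_bw);
	return 0;
}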