On Mon, 2019-09-30 at 09:12 +0200, Juri Lelli wrote:
> On 27/09/19 11:40, Scott Wood wrote:
> > On Fri, 2019-09-27 at 10:11 +0200, Juri Lelli wrote:
> > > Hi Scott,
> > > 
> > > On 27/07/19 00:56, Scott Wood wrote:
> > > > With the changes to migrate disabling, ->set_cpus_allowed() no longer
> > > > gets deferred until migrate_enable(). To avoid releasing the bandwidth
> > > > while the task may still be executing on the old CPU, move the
> > > > subtraction to ->migrate_task_rq().
> > > > 
> > > > Signed-off-by: Scott Wood <swood@xxxxxxxxxx>
> > > > ---
> > > >  kernel/sched/deadline.c | 67 +++++++++++++++++++++++++++++++------------------------------------
> > > >  1 file changed, 31 insertions(+), 36 deletions(-)
> > > > 
> > > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > > > index c18be51f7608..2f18d0cf1b56 100644
> > > > --- a/kernel/sched/deadline.c
> > > > +++ b/kernel/sched/deadline.c
> > > > @@ -1606,14 +1606,42 @@ static void yield_task_dl(struct rq *rq)
> > > >  	return cpu;
> > > >  }
> > > >  
> > > > +static void free_old_cpuset_bw_dl(struct rq *rq, struct task_struct *p)
> > > > +{
> > > > +	struct root_domain *src_rd = rq->rd;
> > > > +
> > > > +	/*
> > > > +	 * Migrating a SCHED_DEADLINE task between exclusive
> > > > +	 * cpusets (different root_domains) entails a bandwidth
> > > > +	 * update. We already made space for us in the destination
> > > > +	 * domain (see cpuset_can_attach()).
> > > > +	 */
> > > > +	if (!cpumask_intersects(src_rd->span, p->cpus_ptr)) {
> > > > +		struct dl_bw *src_dl_b;
> > > > +
> > > > +		src_dl_b = dl_bw_of(cpu_of(rq));
> > > > +		/*
> > > > +		 * We now free resources of the root_domain we are migrating
> > > > +		 * off. In the worst case, sched_setattr() may temporarily fail
> > > > +		 * until we complete the update.
> > > > +		 */
> > > > +		raw_spin_lock(&src_dl_b->lock);
> > > > +		__dl_sub(src_dl_b, p->dl.dl_bw, dl_bw_cpus(task_cpu(p)));
> > > > +		raw_spin_unlock(&src_dl_b->lock);
> > > > +	}
> > > > +}
> > > > +
> > > >  static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused)
> > > >  {
> > > >  	struct rq *rq;
> > > >  
> > > > -	if (p->state != TASK_WAKING)
> > > > +	rq = task_rq(p);
> > > > +
> > > > +	if (p->state != TASK_WAKING) {
> > > > +		free_old_cpuset_bw_dl(rq, p);
> > > 
> > > What happens if a DEADLINE task is moved between cpusets while it was
> > > sleeping? Don't we miss removing from the old cpuset if the task gets
> > > migrated on wakeup?
> > 
> > In that case set_task_cpu() is called by ttwu after setting state to
> > TASK_WAKING.
> 
> Right.
> 
> > I guess it could be annoying if the task doesn't wake up for a
> > long time and therefore doesn't release the bandwidth until then.
> 
> Hummm, I was actually more worried about the fact that we call
> free_old_cpuset_bw_dl() only if p->state != TASK_WAKING.

Oh, right. :-P  Not sure what I had in mind there; we want to call it
regardless.  I assume we need rq->lock in free_old_cpuset_bw_dl()?  So
something like this:

	if (p->state == TASK_WAKING)
		raw_spin_lock(&rq->lock);

	free_old_cpuset_bw_dl(rq, p);

	if (p->state != TASK_WAKING)
		return;

	if (p->dl.dl_non_contending) {
		....

BTW, is the full cpumask_intersects() necessary, or would it suffice to
check that the new CPU is not in the old span?

-Scott
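
For concreteness, the restructuring sketched above might look roughly
like the following. This is an untested sketch, not the posted patch:
it assumes the non-TASK_WAKING callers reach migrate_task_rq_dl() with
rq->lock already held (as the ->set_cpus_allowed() path does via
task_rq_lock()), and it reuses the dl_non_contending handling from the
existing function:

	static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused)
	{
		struct rq *rq = task_rq(p);

		/*
		 * In the TASK_WAKING case we are called from set_task_cpu()
		 * in ttwu without rq->lock held; other callers are assumed
		 * to hold it already.
		 */
		if (p->state == TASK_WAKING)
			raw_spin_lock(&rq->lock);

		/* Release the old root_domain's bandwidth regardless of p->state. */
		free_old_cpuset_bw_dl(rq, p);

		if (p->state != TASK_WAKING)
			return;

		if (p->dl.dl_non_contending) {
			sub_running_bw(&p->dl, &rq->dl);
			p->dl.dl_non_contending = 0;
			/*
			 * If the inactive timer handler is running and cannot
			 * be cancelled, inactive_task_timer() sees
			 * dl_non_contending == 0 and leaves the rq's active
			 * utilization alone, so this is still safe.
			 */
			if (hrtimer_try_to_cancel(&p->dl.inactive_timer) == 1)
				put_task_struct(p);
		}
		sub_rq_bw(&p->dl, &rq->dl);
		raw_spin_unlock(&rq->lock);
	}

The early return keeps the existing behaviour for the affinity-change
path, where the caller remains responsible for dropping the lock.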
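
As for the closing question, the narrower test would presumably look
something like the lines below (hypothetical: it would mean plumbing
new_cpu into free_old_cpuset_bw_dl(), which the posted patch does not
do):

	/*
	 * Hypothetical narrower check: rather than testing whether the
	 * task's whole new affinity mask misses the old root_domain's
	 * span, only test the CPU the task is migrating to.
	 */
	if (!cpumask_test_cpu(new_cpu, src_rd->span)) {
		/* ... bandwidth subtraction as in the patch above ... */
	}

Whether the two checks are equivalent comes down to whether p->cpus_ptr
can still straddle root domains at this point; the cpumask_intersects()
form only frees the bandwidth once the task can no longer run anywhere
in the old root_domain.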