On 15/03/23 14:49, Qais Yousef wrote:
> On 03/15/23 12:18, Juri Lelli wrote:

...

> > +void inc_dl_tasks_cs(struct task_struct *p)
> > +{
> > +	struct cpuset *cs = task_cs(p);
> 
> nit:
> 
> I *think* task_cs() assumes rcu_read_lock() is held, right?
> 
> Would it make sense to WARN_ON(!rcu_read_lock_held()) to at least
> annotate the deps?

Think we have that check in task_css_set_check()? Or maybe task_cs()
should do that..

> > +
> > +	cs->nr_deadline_tasks++;
> > +}
> > +
> > +void dec_dl_tasks_cs(struct task_struct *p)
> > +{
> > +	struct cpuset *cs = task_cs(p);
> 
> nit: ditto
> 
> > +
> > +	cs->nr_deadline_tasks--;
> > +}
> > +

...

> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 5902cbb5e751..d586a8440348 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7683,6 +7683,16 @@ static int __sched_setscheduler(struct task_struct *p,
> >  		goto unlock;
> >  	}
> >  
> > +	/*
> > +	 * In case a task is setscheduled to SCHED_DEADLINE, or if a task is
> > +	 * moved to a different sched policy, we need to keep track of that on
> > +	 * its cpuset (for correct bandwidth tracking).
> > +	 */
> > +	if (dl_policy(policy) && !dl_task(p))
> > +		inc_dl_tasks_cs(p);
> > +	else if (dl_task(p) && !dl_policy(policy))
> > +		dec_dl_tasks_cs(p);
> > +
> 
> Would it be better to use switched_to_dl()/switched_from_dl() instead to
> inc/dec_dl_tasks_cs()?

Ah, makes sense. I'll play with this.

Thanks,
Juri
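
P.S. For completeness, the explicit annotation suggested for the first
nit would look something like the below. Only a sketch, assuming
inc_dl_tasks_cs() keeps reading the cpuset via task_cs() directly; if
task_css_set_check() already catches this under PROVE_RCU it would be
redundant anyway:

void inc_dl_tasks_cs(struct task_struct *p)
{
	struct cpuset *cs;

	/* task_cs() relies on the caller holding rcu_read_lock() */
	WARN_ON(!rcu_read_lock_held());

	cs = task_cs(p);
	cs->nr_deadline_tasks++;
}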
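
P.P.S. What I'd be playing with for the switched_to_dl()/
switched_from_dl() idea is roughly the below: a completely untested
sketch that moves the calls to the inc_dl_tasks_cs()/dec_dl_tasks_cs()
helpers from this patch into the existing sched_class callbacks in
kernel/sched/deadline.c (existing bodies elided):

static void switched_from_dl(struct rq *rq, struct task_struct *p)
{
	...
	/* Task is leaving SCHED_DEADLINE, drop it from its cpuset count. */
	dec_dl_tasks_cs(p);
	...
}

static void switched_to_dl(struct rq *rq, struct task_struct *p)
{
	...
	/* Task is becoming SCHED_DEADLINE, account it on its cpuset. */
	inc_dl_tasks_cs(p);
	...
}

That should also let us drop the open coded policy checks from
__sched_setscheduler(), since (if I read check_class_changed() right)
these callbacks only fire when the scheduling class actually changes.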