On Tue, 21 Feb 2023 at 13:53, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
>
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 6c61bde49152..38decae3e156 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -568,6 +568,8 @@ struct sched_entity {
> >  	/* cached value of my_q->h_nr_running */
> >  	unsigned long			runnable_weight;
> >  #endif
> > +	/* preemption offset in ns */
> > +	long				latency_offset;
>
> I wonder about the type here; does it make sense to have it depend on
> the bitness; that is if s32 is big enough on 32bit then surely it is so
> too on 64bit, and if not, then it should be unconditionally s64.

I mainly wanted to stay aligned with the optimal width of the arch, but
32 bits is enough.

>
> > +static void set_latency_offset(struct task_struct *p)
> > +{
> > +	long weight = sched_latency_to_weight[p->latency_prio];
> > +	s64 offset;
> > +
> > +	offset = weight * get_sleep_latency(false);
> > +	offset = div_s64(offset, NICE_LATENCY_WEIGHT_MAX);
> > +	p->se.latency_offset = (long)offset;
> > +}
>
> > +/*
> > + * latency weight for wakeup preemption
> > + */
> > +const int sched_latency_to_weight[40] = {
> > + /* -20 */     -1024,  -973,  -922,  -870,  -819,
> > + /* -15 */      -768,  -717,  -666,  -614,  -563,
> > + /* -10 */      -512,  -461,  -410,  -358,  -307,
> > + /*  -5 */      -256,  -205,  -154,  -102,   -51,
> > + /*   0 */         0,    51,   102,   154,   205,
> > + /*   5 */       256,   307,   358,   410,   461,
> > + /*  10 */       512,   563,   614,   666,   717,
> > + /*  15 */       768,   819,   870,   922,   973,
> > +};
>
> I'm slightly confused by this table, isn't that simply the linear
> function?

Yes, I originally had a nonlinear function in mind, hence the table.

>
> Isn't all that the same as:
>
>	se->se.latency_offset = get_sleep_latency * nice / (NICE_LATENCY_WIDTH/2);
>
> ?
> The reason we have prio_to_weight[] is because it's an exponential,
> which is a bit more cumbersome to calculate, but surely we can do a
> linear function at runtime.