On 31 Mar 2023 17:26:51 +0200, Vincent Guittot <vincent.guittot@xxxxxxxxxx> wrote:

> On Tue, 28 Mar 2023 at 13:06, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > @@ -4832,6 +4834,18 @@ place_entity(struct cfs_rq *cfs_rq, stru
> >  		lag = se->vlag;
> >
> >  		/*
> > +		 * For latency sensitive tasks; those that have a shorter than
> > +		 * average slice and do not fully consume the slice, transition
> > +		 * to EEVDF placement strategy #2.
> > +		 */
> > +		if (sched_feat(PLACE_FUDGE) &&
> > +		    cfs_rq->avg_slice > se->slice * cfs_rq->avg_load) {
> > +			lag += vslice;
> > +			if (lag > 0)
> > +				lag = 0;
>
> By using different lag policies for different tasks, doesn't this create
> unfairness between tasks?
>
> I wanted to stress this situation with a simple use case, but it seems
> that even without changing the slice there is a fairness problem:
>
>   Task A always runs.
>   Task B loops on: running 1ms then sleeping 1ms.
>   Default nice and latency-nice priority for both.
>   Each task should get around 50% of the CPU time.
>
> Fairness is OK with tip/sched/core, but with EEVDF Task B only gets
> around 30%.

Convincing evidence of a glitch in wakeup preemption.
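
For reference, a minimal userspace sketch of the reproducer Vincent describes
(this is not from the original thread; the CPU-0 pinning, helper names and 1ms
timings are illustrative assumptions). Task A spins continuously, Task B
alternates ~1ms of spinning with ~1ms of sleep; the CPU share of the two PIDs
can then be compared with top or /proc/<pid>/schedstat.

/* fairness-repro.c: hypothetical sketch of the described test case. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Pin the calling task to CPU 0 so both tasks compete for one CPU. */
static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");
}

/* Busy-wait for roughly @ms milliseconds. */
static void spin_ms(long ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000000L +
		 (now.tv_nsec - start.tv_nsec) < ms * 1000000L);
}

int main(void)
{
	pid_t pid = fork();

	if (pid < 0) {
		perror("fork");
		return 1;
	}

	pin_to_cpu0();

	if (pid == 0) {
		/* Task B: run ~1ms, sleep ~1ms, repeat. */
		struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000000 };

		for (;;) {
			spin_ms(1);
			nanosleep(&ts, NULL);
		}
	}

	/* Task A: always runnable. */
	printf("Task A pid %d, Task B pid %d\n", getpid(), pid);
	for (;;)
		spin_ms(1);

	return 0;
}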