Hello Vincent

On 16 Sep 2022 10:03:02 +0200 Vincent Guittot <vincent.guittot@xxxxxxxxxx> wrote:
> 
> @@ -4606,6 +4608,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> 
>  	se = __pick_first_entity(cfs_rq);
>  	delta = curr->vruntime - se->vruntime;
> +	delta -= wakeup_latency_gran(curr, se);
> 
>  	if (delta < 0)
>  		return;

What is derived from the latency nice you added is a runtime granularity,
which plays a role in preempting the current task. Given that latency nice
is defined the same way as nice, that runtime granularity can be computed
without introducing the latency nice.

Just some thoughts for now.

Hillf

+++ b/kernel/sched/fair.c
@@ -4569,7 +4569,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
 static void
 check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
-	unsigned long ideal_runtime, delta_exec;
+	unsigned long ideal_runtime, delta_exec, granu;
 	struct sched_entity *se;
 	s64 delta;
 
@@ -4594,6 +4594,14 @@ check_preempt_tick(struct cfs_rq
 		return;
 
 	se = __pick_first_entity(cfs_rq);
+
+	granu = sysctl_sched_min_granularity +
+		(ideal_runtime - sysctl_sched_min_granularity) *
+		(se->latency_nice + 20) / LATENCY_NICE_WIDTH;
+
+	if (delta_exec < granu)
+		return;
+
 	delta = curr->vruntime - se->vruntime;
 
 	if (delta < 0)
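
In case the interpolation in the hunk above is not obvious, below is a
throwaway userspace sketch of the same mapping, nothing more. It is not
kernel code; the LATENCY_NICE_WIDTH value of 40, the helper name and the
sample nanosecond numbers are only assumptions for illustration. The idea
it shows: a waiter at latency nice -20 yields the minimum granularity, so
curr can be tick-preempted early, while a waiter at +19 lets curr consume
almost its whole ideal slice first.

/*
 * Standalone sketch (not kernel code) of the linear interpolation used
 * in the hunk above.  LATENCY_NICE_WIDTH == 40 and the sample values for
 * sysctl_sched_min_granularity and ideal_runtime are assumptions.
 */
#include <stdio.h>

#define LATENCY_NICE_WIDTH	40

static unsigned long runtime_granularity(unsigned long min_gran,
					 unsigned long ideal_runtime,
					 int latency_nice)
{
	/* Map latency_nice in [-20, 19] linearly onto [min_gran, ideal_runtime). */
	return min_gran + (ideal_runtime - min_gran) *
		(latency_nice + 20) / LATENCY_NICE_WIDTH;
}

int main(void)
{
	unsigned long min_gran = 750000;	/* 0.75 ms, assumed default */
	unsigned long ideal = 3000000;		/* 3 ms slice, example only */
	int ln;

	for (ln = -20; ln <= 19; ln += 13)
		printf("latency_nice %3d -> granularity %lu ns\n",
		       ln, runtime_granularity(min_gran, ideal, ln));
	return 0;
}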