With the consolidation of put_prev_task()/set_next_task() (see commit
436f3eed5c69 ("sched: Combine the last put_prev_task() and the first
set_next_task()")), the transition between these two functions is now
skipped when the previous and the next tasks are the same.

As a result, ops.update_idle() is now called only once when the CPU
transitions to the idle class. If the CPU stays active (e.g., through a
call to scx_bpf_kick_cpu()), ops.update_idle() is not triggered again,
since the task remains unchanged (rq->idle).

While this behavior seems generally correct, it can cause issues in
certain sched_ext scenarios. For example, a BPF scheduler might use
logic like the following to keep the CPU active under specific
conditions:

void BPF_STRUCT_OPS(sched_update_idle, s32 cpu, bool idle)
{
	if (!idle)
		return;
	if (condition)
		scx_bpf_kick_cpu(cpu, 0);
}

A call to scx_bpf_kick_cpu() wakes up the CPU, so in theory
ops.update_idle() should be triggered again until the condition becomes
false. However, this doesn't happen, and scx_bpf_kick_cpu() doesn't
produce the expected effect.

In practice, this change badly impacts performance in user-space
schedulers that rely on ops.update_idle() to activate their user-space
component. For instance, in the case of scx_rustland, performance drops
significantly (e.g., gaming benchmarks fall from ~60fps to ~10fps).

To address this, trigger ops.update_idle() from pick_task_idle() rather
than from set_next_task_idle(). This restores the correct behavior of
ops.update_idle() and fixes the performance regression in scx_rustland.

Fixes: 7c65ae81ea86 ("sched_ext: Don't call put_prev_task_scx() before picking the next task")
Signed-off-by: Andrea Righi <andrea.righi@xxxxxxxxx>
---
 kernel/sched/idle.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

ChangeLog v2 -> v3:
 - add a comment to clarify why we need to update the scx idle state in pick_task()

ChangeLog v1 -> v2:
 - move the logic from put_prev_set_next_task() to scx_update_idle()

diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index d2f096bb274c..d336a05a6006 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -459,13 +459,26 @@ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct t
 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
 {
 	update_idle_core(rq);
-	scx_update_idle(rq, true);
 	schedstat_inc(rq->sched_goidle);
 	next->se.exec_start = rq_clock_task(rq);
 }
 
 struct task_struct *pick_task_idle(struct rq *rq)
 {
+	/*
+	 * When switching from a non-idle to the idle class, .set_next_task()
+	 * is called only once during the transition.
+	 *
+	 * However, the CPU may remain active for multiple rounds (e.g., by
+	 * calling scx_bpf_kick_cpu() from the ops.update_idle() callback).
+	 *
+	 * In such cases, we need to keep updating the scx idle state to
+	 * properly re-trigger the ops.update_idle() callback.
+	 *
+	 * Updating the state in .pick_task(), instead of .set_next_task(),
+	 * ensures correct handling of scx idle state transitions.
+	 */
+	scx_update_idle(rq, true);
 	return rq->idle;
 }
 
-- 
2.47.0
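
For illustration only (not part of the patch): a minimal sketch, in the spirit
of the commit-message example, of how a scx_rustland-like scheduler might keep
a CPU awake from ops.update_idle() while its user-space component still has
queued work. The nr_user_pending counter and the <scx/common.bpf.h> include
are assumptions for the sketch; scx_bpf_kick_cpu() and the callback signature
come from the text above.

#include <scx/common.bpf.h>

/* Hypothetical counter: tasks still queued to the user-space scheduler. */
static u64 nr_user_pending;

void BPF_STRUCT_OPS(sched_update_idle, s32 cpu, bool idle)
{
	/* Only act when the CPU is entering the idle state. */
	if (!idle)
		return;

	/*
	 * Keep the CPU awake while user-space work is pending; each kick
	 * should cause ops.update_idle() to fire again, which is the
	 * behavior restored by triggering it from pick_task_idle().
	 */
	if (nr_user_pending)
		scx_bpf_kick_cpu(cpu, 0);
}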