On 13 Sep 2022 19:38:17 +0530 Pavan Kondeti <quic_pkondeti@xxxxxxxxxxx>
> Hi
>
> Given that psi_avgs_work()->collect_percpu_times()->get_recent_times()
> runs from a kworker thread, the PSI_NONIDLE condition would be observed,
> as there is a RUNNING task. So we would always end up re-arming the work.
>
> If the work is re-armed from psi_avgs_work() itself, the backing-off
> logic in psi_task_change() (which will be moved to psi_task_switch()
> soon) can't help. The work is already scheduled, so we don't do anything
> there.
>
> I am probably missing something here. Can you please clarify how we
> shut off re-arming of the psi avgs work?

Instead of open coding schedule_delayed_work() in a bid to check whether
the timer fires on the idle task (see delayed_work_timer_fn()), the idle
task is tracked in psi_task_switch() and the kworker checks whether it
preempted the idle task.

Only thoughts for now.

Hillf

+++ b/kernel/sched/psi.c
@@ -412,6 +412,8 @@ static u64 update_averages(struct psi_gr
 	return avg_next_update;
 }
 
+static DEFINE_PER_CPU(int, prev_task_is_idle);
+
 static void psi_avgs_work(struct work_struct *work)
 {
 	struct delayed_work *dwork;
@@ -439,7 +441,7 @@ static void psi_avgs_work(struct work_st
 	if (now >= group->avg_next_update)
 		group->avg_next_update = update_averages(group, now);
 
-	if (nonidle) {
+	if (nonidle && 0 == per_cpu(prev_task_is_idle, raw_smp_processor_id())) {
 		schedule_delayed_work(dwork, nsecs_to_jiffies(
 				group->avg_next_update - now) + 1);
 	}
@@ -859,6 +861,7 @@ void psi_task_switch(struct task_struct
 	if (prev->pid) {
 		int clear = TSK_ONCPU, set = 0;
 
+		per_cpu(prev_task_is_idle, cpu) = 0;
 		/*
 		 * When we're going to sleep, psi_dequeue() lets us
 		 * handle TSK_RUNNING, TSK_MEMSTALL_RUNNING and
@@ -888,7 +891,8 @@ void psi_task_switch(struct task_struct
 			for (; group; group = iterate_groups(prev, &iter))
 				psi_group_change(group, cpu, clear, set, now, true);
 		}
-	}
+	} else
+		per_cpu(prev_task_is_idle, cpu) = 1;
 }
 
 /**
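
For illustration only, here is a tiny userspace sketch of the same idea.
It is not part of the patch above: a single CPU is assumed, and
task_switch()/avgs_work() are hypothetical stand-ins for
psi_task_switch()/psi_avgs_work(), with a plain bool standing in for the
per-CPU prev_task_is_idle flag.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the per-CPU flag; only one CPU is modelled here. */
static bool prev_task_is_idle;

/* Models psi_task_switch(): remember whether the outgoing task was idle. */
static void task_switch(int prev_pid)
{
	prev_task_is_idle = (prev_pid == 0);	/* pid 0 models the idle task */
}

/* Models psi_avgs_work(): re-arm only if the worker preempted a real task. */
static void avgs_work(void)
{
	printf("update averages\n");
	if (!prev_task_is_idle)
		printf("re-arm delayed work\n");
	else
		printf("previous task was idle, let the work expire\n");
}

int main(void)
{
	task_switch(42);	/* worker preempted a normal task */
	avgs_work();		/* keeps itself scheduled */
	task_switch(0);		/* worker preempted the idle task */
	avgs_work();		/* stops re-arming */
	return 0;
}

The point of the pattern is that the re-arming decision no longer depends
on PSI_NONIDLE (which the kworker itself always satisfies) but on whether
anything other than the idle task ran before the worker.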