Hello Vincent, Peter,
On 6/16/2024 8:27 PM, Vincent Guittot wrote:
On Sat, 15 Jun 2024 at 03:28, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
On Fri, Jun 14, 2024 at 12:48:37PM +0200, Vincent Guittot wrote:
On Fri, 14 Jun 2024 at 11:28, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
Vincent [5] pointed out a case where the idle load kick will fail to
run on an idle CPU since the IPI handler launching the ILB will check
for need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.
Is this the need_resched() in _nohz_idle_balance()? Should we change
this to 'need_resched() && (rq->nr_running || rq->ttwu_pending)' or
something along those lines?
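(For context, the check being referred to is the abort condition in the
ILB walk in _nohz_idle_balance(); with the suggested change it would read
roughly as below. Illustrative sketch only, not a tested patch; rq and the
abort label come from the surrounding function.)

	/*
	 * Only abort the ILB walk when this CPU actually has work queued,
	 * not merely because TIF_NEED_RESCHED was used to wake it out of
	 * a polling idle state.
	 */
	if (need_resched() && (rq->nr_running || rq->ttwu_pending))
		goto abort;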
It's not only this but also the check in do_idle(), which exits the
idle loop to look for tasks to schedule
Is that really a problem? Reading the initial email, the problem seems
to be newidle balance, not hitting schedule. Schedule should be fairly
quick if there's nothing to do, no?
There are 2 problems:
- Because of NEED_RESCHED being set, we go through the full schedule
path for no reason and we finally do a sched_balance_newidle()
Peter's patch earlier in the thread seems to improve the above case by
speeding up the schedule() loop, similar to the very first solution
I tried with
https://lore.kernel.org/lkml/20240119084548.2788-1-kprateek.nayak@xxxxxxx/
I see the same level of improvement (if not better) with Peter's
SM_IDLE solution:
==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                              time [pct imp]
tip:sched/core                       1.00 [baseline]
tip:sched/core + revert              0.40 [60.26%]
tip:sched/core + TIF_NOTIFY_IPI      0.46 [54.88%]
tip:sched/core + SM_IDLE             0.38 [72.64%]
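For anyone catching up on the thread, my understanding of the SM_IDLE
approach (a simplified sketch, not Peter's exact diff) is that __schedule()
grows an idle-specific mode which keeps running the idle task when nothing
is runnable, instead of going through the full pick_next_task() /
sched_balance_newidle() path:

	/* Simplified sketch; see Peter's patch earlier in the thread. */
	#define SM_IDLE		(-1)	/* re-entry from the idle loop */

	static void __sched notrace __schedule(int sched_mode)
	{
		struct rq *rq = this_rq();
		struct task_struct *prev = rq->curr;
		struct task_struct *next;

		/* ... */

		/*
		 * If we are here only because TIF_NEED_RESCHED was used to
		 * wake the CPU out of a polling idle state and nothing is
		 * runnable, stay on the idle task and skip the expensive
		 * pick_next_task() / sched_balance_newidle() path.
		 */
		if (sched_mode == SM_IDLE && !rq->nr_running) {
			next = prev;
			goto picked;
		}

		/* ... usual deactivate / pick_next_task() path ... */
	picked:
		/* ... context switch, or keep running prev ... */
	}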
- Because of need_resched being set to wake up the cpu, we will not
kick the softirq to run the nohz idle load balance and get a chance to
pull a task onto an idle CPU
However, this issue with need_resched() still remains. Any
need_resched() check within an interrupt context will return true if the
target CPU is perceived to be in a polling idle state by the sender as a
result of the optimization in commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()").
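For reference, if I'm reading it right, the optimization from that commit
boils down to something along these lines (quoted from memory, details may
differ): the sender elides the actual IPI when the target's idle task
advertises TIF_POLLING_NRFLAG and sets TIF_NEED_RESCHED on it instead,
trusting the polling idle loop to notice the flag:

	void send_call_function_single_ipi(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		/*
		 * set_nr_if_polling() only succeeds if the remote idle task
		 * has TIF_POLLING_NRFLAG set; in that case TIF_NEED_RESCHED
		 * is set instead of sending a real IPI, and the idle loop is
		 * expected to notice it and flush the pending call.
		 */
		if (!set_nr_if_polling(rq->idle))
			arch_send_call_function_single_ipi(cpu);
		else
			trace_sched_wake_idle_without_ipi(cpu);
	}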
If an arch defines TIF_POLLING_NRFLAG, do_idle() sets the flag and keeps
it set until the path hits call_cpuidle(), where the flag is cleared just
before handing off the state entry to the cpuidle driver. An incoming
interrupt in this window is signalled by setting TIF_NEED_RESCHED in the
idle task's thread info, which lets the idle path bail out early and
return before calling the driver-specific routine. Beyond that point, the
cpuidle driver handles the idle entry.
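To make the window concrete, the bail-out I'm referring to is roughly the
one in call_cpuidle() in kernel/sched/idle.c (quoted from memory):

	static int call_cpuidle(struct cpuidle_driver *drv,
				struct cpuidle_device *dev, int next_state)
	{
		/*
		 * Clearing TIF_POLLING_NRFLAG here closes the polling
		 * window; if TIF_NEED_RESCHED was set in the meantime,
		 * skip the driver and return so the idle loop can react.
		 */
		if (current_clr_polling_and_test()) {
			dev->last_residency_ns = 0;
			local_irq_enable();
			return -EBUSY;
		}

		/* Hand off the actual idle state entry to the driver. */
		return cpuidle_enter(drv, dev, next_state);
	}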
To answer Vincent's observation about ARM32 having TIF_POLLING_NRFLAG:
I think an arch may define TIF_POLLING_NRFLAG just to utilize this
optimization in the generic idle path.
I mean, it's fairly trivial to figure out if there really is going to be
work there.
Using an alternate flag instead of NEED_RESCHED to indicate a pending
IPI was suggested as the correct approach to solve this problem in the
same thread.
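For illustration only, the idea looks something like the sketch below;
set_notify_ipi_if_polling() and idle_should_break_polling() are placeholder
names here, not the actual helpers from that series:

	/*
	 * Hypothetical sketch: use a dedicated flag for "IPI pending" so
	 * that need_resched() keeps meaning "a task needs to run here".
	 * The helper names below are illustrative, not existing symbols.
	 */
	void send_call_function_single_ipi(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		/* Succeeds only if the remote idle task is polling. */
		if (!set_notify_ipi_if_polling(rq->idle))
			arch_send_call_function_single_ipi(cpu);
	}

	/* Idle loop side: break out on a resched OR a pending IPI. */
	static inline bool idle_should_break_polling(void)
	{
		return need_resched() || test_thread_flag(TIF_NOTIFY_IPI);
	}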
So adding per-arch changes for this seems like something we shouldn't
do unless there really are no other sane options.
That is, I really think we should start with something like the below
and then fix any fallout from that.
The main problem is that need_resched becomes somewhat meaningless
because it doesn't only mean "I need to resched a task", and we have
to add more checks around it even for archs not using polling
True, however we already had some of that with the wakeup list, which
made nr_running less 'reliable'.
The thing is, most architectures seem to have the TIF_POLLING_NRFLAG
bit, even if their main idle routine isn't actually using it, much of
Yes, I'm surprised that the Arm arch has TIF_POLLING_NRFLAG even though
it has never been supported by the arch
the idle loop, up to the point it hits the arch idle routine, will have
it set and will thus tickle these cases *sometimes*.
[..snip..]
--
Thanks and Regards,
Prateek