Re: need_heavy_qs flag for PREEMPT=y kernels

On Sun, Aug 11, 2019 at 04:30:24PM -0700, Paul E. McKenney wrote:
> On Sun, Aug 11, 2019 at 05:25:05PM -0400, Joel Fernandes wrote:
> > On Sun, Aug 11, 2019 at 02:16:46PM -0700, Paul E. McKenney wrote:
> > > On Sun, Aug 11, 2019 at 02:34:08PM -0400, Joel Fernandes wrote:
> > > > On Sun, Aug 11, 2019 at 2:08 PM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > Hi Paul, everyone,
> > > > >
> > > > > I noticed while reading the code that the need_heavy_qs check and the
> > > > > rcu_momentary_dyntick_idle() call happen only for !PREEMPT kernels. Don't we
> > > > > need to call this for PREEMPT kernels too, for the benefit of nohz_full CPUs?
> > > > >
> > > > > Consider the following events:
> > > > > 1. Kernel is PREEMPT=y configuration.
> > > > > 2. CPU 2 is a nohz_full CPU running only a single task and the tick is off.
> > > > > 3. CPU 2 is running only in kernel mode and does not enter user mode or idle.
> > > > > 4. The grace-period thread running on CPU 3 enters the FQS loop.
> > > > > 5. Enough time passes, so it sets need_heavy_qs for CPU 2.
> > > > > 6. CPU 2 is still in kernel mode but does cond_resched().
> > > > > 7. cond_resched() does not call rcu_momentary_dyntick_idle() because PREEMPT=y.
> > > > >
> > > > > Is 7. not calling rcu_momentary_dyntick_idle() a lost opportunity for the FQS
> > > > > loop to detect that the CPU has passed through a quiescent state?
> > > > >
> > > > > Is this done so that cond_resched() is fast for PREEMPT=y kernels?
> > > > 
> > > > Oh, so I take it this bit of code in rcu_implicit_dynticks_qs(), with
> > > > the accompanying comments, takes care of the scenario I describe?
> > > > Another way could be to just call rcu_momentary_dyntick_idle() from
> > > > cond_resched() on nohz_full CPUs? Would that be too pricey?
> > > >         /*
> > > >          * NO_HZ_FULL CPUs can run in-kernel without rcu_sched_clock_irq!
> > > >          * The above code handles this, but only for straight cond_resched().
> > > >          * And some in-kernel loops check need_resched() before calling
> > > >          * cond_resched(), which defeats the above code for CPUs that are
> > > >          * running in-kernel with scheduling-clock interrupts disabled.
> > > >          * So hit them over the head with the resched_cpu() hammer!
> > > >          */
> > > >         if (tick_nohz_full_cpu(rdp->cpu) &&
> > > >                    time_after(jiffies,
> > > >                               READ_ONCE(rdp->last_fqs_resched) + jtsq * 3)) {
> > > >                 resched_cpu(rdp->cpu);
> > > >                 WRITE_ONCE(rdp->last_fqs_resched, jiffies);
> > > >         }
> > > 
> > > Yes, for NO_HZ_FULL=y && PREEMPT=y kernels.
> > 
> > Actually, I was referring only to the case of NO_HZ_FULL=y being the
> > troublesome one (i.e., the rcu_need_heavy_qs flag would have no effect).
> > 
> > For NO_HZ_FULL=n, I have full confidence that the scheduler tick will notice
> > rcu_urgent_qs and trigger a reschedule. The ensuing softirq then reports the
> > quiescent state, helping the grace period end.
> 
> Whew!
> 
> That confidence was not at all apparent in your initial email.

Sorry, I should certainly improve the clarity of my emails.

> > > Your thought of including rcu_momentary_dyntick_idle() would function
> > > correctly, but would cause performance issues.  Even adding additional
> > > compares and branches in that hot codepath is visible to 0day test robot!
> > > So adding a read-modify-write atomic operation to that code path would
> > > get attention of the wrong kind.  ;-)
> > 
> > But wouldn't these performance issues also be visible with
> > NO_HZ_FULL=y && PREEMPT=n?
> 
> In PREEMPT=n, cond_resched() already has a check, and with quite a bit
> of care it is possible to introduce another.

Actually, maybe I did not express myself properly. I meant that the performance
issues 0day found with invoking rcu_momentary_dyntick_idle() from hot paths
(the ones you mentioned above) should also have shown up with PREEMPT=n.
However, it sounds like those issues were found only when invoking
rcu_momentary_dyntick_idle() with PREEMPT=y - which, as you said, is the reason
rcu_momentary_dyntick_idle() is not invoked in PREEMPT=y kernels. So I was
asking: why are these same performance issues not seen with PREEMPT=n? And if
they are seen, why does mainline invoke rcu_momentary_dyntick_idle() for
PREEMPT=n kernels?
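
To make my question concrete, here is roughly how I understand the PREEMPT=n
path keeps the common case cheap (a simplified sketch from my reading of
rcu_all_qs() in kernel/rcu/tree_plugin.h, not verbatim, so details may
differ):

	/* Sketch, not verbatim kernel code: the hot path is a single
	 * per-CPU load plus a branch; the atomic-heavy
	 * rcu_momentary_dyntick_idle() runs only after the FQS loop has
	 * set the urgency flags for this CPU. */
	void rcu_all_qs(void)
	{
		unsigned long flags;

		if (!raw_cpu_read(rcu_data.rcu_urgent_qs))
			return;	/* common case: no atomics at all */
		preempt_disable();
		this_cpu_write(rcu_data.rcu_urgent_qs, false);
		if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs))) {
			local_irq_save(flags);
			rcu_momentary_dyntick_idle();	/* atomic RMW */
			local_irq_restore(flags);
		}
		rcu_qs();
		preempt_enable();
	}

If that reading is right, the expensive read-modify-write happens only once a
grace period has been delayed long enough for the flags to be set.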


> >                             Why is PREEMPT=n made an exception?
> 
> The exception is actually CONFIG_NO_HZ_FULL=y && CONFIG_PREEMPT=y.
> In that case, we can rely on neither the scheduling-clock interrupt
> nor on cond_resched().  In the other three cases, we have one or both.

Agreed, and that's what I found weird: PREEMPT=y with NO_HZ_FULL=y has no
mechanism to rely on, while PREEMPT=n with NO_HZ_FULL=y does. So my question
was about the rationale for this difference: either we invoke
rcu_momentary_dyntick_idle() for both PREEMPT settings, or we invoke it for
neither. Why invoke it for one but not the other?
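
For context, here is my understanding of why the two configs differ in where
such a check could even live (a sketch along the lines of _cond_resched() in
kernel/sched/core.c of this era; simplified, so treat the details as an
assumption on my part):

	#ifdef CONFIG_PREEMPT
	/* Sketch: with PREEMPT=y the kernel can preempt almost anywhere,
	 * so cond_resched() reduces to (nearly) nothing and there is no
	 * natural polling hook for RCU to piggyback on. */
	static inline int _cond_resched(void) { return 0; }
	#else
	/* Sketch: with PREEMPT=n, cond_resched() must really poll, and
	 * RCU hangs its quiescent-state check on that existing poll. */
	int __sched _cond_resched(void)
	{
		if (should_resched(0)) {
			preempt_schedule_common();
			return 1;
		}
		rcu_all_qs();	/* the check we are discussing */
		return 0;
	}
	#endif

So invoking rcu_momentary_dyntick_idle() for PREEMPT=y would mean adding cost
to a call that is otherwise essentially free, which may be part of the
asymmetry here.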

> Next question:  Why does rcu_implicit_dynticks_qs() check only for
> tick_nohz_full_cpu() and not also IS_ENABLED(CONFIG_PREEMPT)?  After
> all, a nohz_full CPU in a !CONFIG_PREEMPT kernel should be able to
> rely on cond_resched(), right?
> 
> Should this change?  Why or why not?

Let me think more about this :) I have an answer in mind, but I will mull it
over a bit more and then respond :)
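
(For reference, the variant I take you to be asking about would look something
like the following - hypothetical, not mainline - with the resched_cpu()
hammer gated on kernels where cond_resched() cannot be relied on:)

	if (IS_ENABLED(CONFIG_PREEMPT) &&
	    tick_nohz_full_cpu(rdp->cpu) &&
	    time_after(jiffies,
		       READ_ONCE(rdp->last_fqs_resched) + jtsq * 3)) {
		resched_cpu(rdp->cpu);
		WRITE_ONCE(rdp->last_fqs_resched, jiffies);
	}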

thanks,

 - Joel



