Re: need_heavy_qs flag for PREEMPT=y kernels

On Sun, Aug 11, 2019 at 02:16:46PM -0700, Paul E. McKenney wrote:
> On Sun, Aug 11, 2019 at 02:34:08PM -0400, Joel Fernandes wrote:
> > On Sun, Aug 11, 2019 at 2:08 PM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > >
> > > Hi Paul, everyone,
> > >
> > > I noticed while reading the code that the need_heavy_qs check and the
> > > rcu_momentary_dyntick_idle() call only happen for !PREEMPT kernels. Don't we
> > > need to do this for PREEMPT kernels too, for the benefit of nohz_full CPUs?
> > >
> > > Consider the following events:
> > > 1. Kernel is PREEMPT=y configuration.
> > > 2. CPU 2 is a nohz_full CPU running only a single task and the tick is off.
> > > 3. CPU 2 is running only in kernel mode and does not enter user mode or idle.
> > > 4. The grace period thread running on CPU 3 enters the fqs loop.
> > > 5. Enough time passes and it sets need_heavy_qs for CPU 2.
> > > 6. CPU 2 is still in kernel mode but does cond_resched().
> > > 7. cond_resched() does not call rcu_momentary_dyntick_idle() because PREEMPT=y.
> > >
> > > Isn't step 7 skipping rcu_momentary_dyntick_idle() a lost opportunity for the
> > > FQS loop to detect that the CPU has passed through a quiescent state?
> > >
> > > Is this done so that cond_resched() is fast for PREEMPT=y kernels?
> > 
> > Oh, so I take it this bit of code in rcu_implicit_dynticks_qs(), with
> > the accompanying comments, takes care of the scenario I describe?
> > Another way could be to just call rcu_momentary_dyntick_idle() during
> > cond_resched() for nohz_full CPUs? Is that pricey?
> >         /*
> >          * NO_HZ_FULL CPUs can run in-kernel without rcu_sched_clock_irq!
> >          * The above code handles this, but only for straight cond_resched().
> >          * And some in-kernel loops check need_resched() before calling
> >          * cond_resched(), which defeats the above code for CPUs that are
> >          * running in-kernel with scheduling-clock interrupts disabled.
> >          * So hit them over the head with the resched_cpu() hammer!
> >          */
> >         if (tick_nohz_full_cpu(rdp->cpu) &&
> >                    time_after(jiffies,
> >                               READ_ONCE(rdp->last_fqs_resched) + jtsq * 3)) {
> >                 resched_cpu(rdp->cpu);
> >                 WRITE_ONCE(rdp->last_fqs_resched, jiffies);
> >         }
> 
> Yes, for NO_HZ_FULL=y&&PREEMPT=y kernels.

Actually, I was only referring to NO_HZ_FULL=y as the troublesome case
(i.e. the rcu_need_heavy_qs flag would have no effect there).
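
To make that concrete: with PREEMPT=y, cond_resched() compiles down to little
more than a might_sleep() debug check, so the rcu_all_qs() path that consults
rcu_urgent_qs/rcu_need_heavy_qs is never reached from it. Roughly, paraphrasing
kernel/sched/core.c and kernel/rcu/tree.c of about this vintage (so the details
may not match the tree exactly):

#ifndef CONFIG_PREEMPT
int __sched _cond_resched(void)
{
	if (should_resched(0)) {
		preempt_schedule_common();
		return 1;
	}
	rcu_all_qs();		/* only the !PREEMPT build gets here */
	return 0;
}
#endif

void rcu_all_qs(void)
{
	unsigned long flags;

	if (!raw_cpu_read(rcu_data.rcu_urgent_qs))
		return;
	preempt_disable();
	this_cpu_write(rcu_data.rcu_urgent_qs, false);
	if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs))) {
		local_irq_save(flags);
		rcu_momentary_dyntick_idle();
		local_irq_restore(flags);
	}
	rcu_qs();		/* report an ordinary quiescent state */
	preempt_enable();
}

So on PREEMPT=y the need_heavy_qs flag has no cond_resched() consumer at all,
which is why the resched_cpu() hammer quoted above is what actually saves the
day for nohz_full CPUs.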

For NO_HZ_FULL=n, I have full confidence that the scheduler tick will notice
rcu_urgent_qs and force a reschedule. The ensuing softirq then does what is
needed to help end the grace period.
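
(That path, roughly, for anyone following along; again a paraphrase of
kernel/rcu/tree.c from about this vintage, so details may differ:)

void rcu_sched_clock_irq(int user)
{
	raw_cpu_inc(rcu_data.ticks_this_gp);
	/* Pairs with the store-release that sets rcu_urgent_qs. */
	if (smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs))) {
		/* Idle and userspace execution are already quiescent. */
		if (!rcu_is_cpu_rrupt_from_idle() && !user) {
			set_tsk_need_resched(current);
			set_preempt_need_resched();
		}
		__this_cpu_write(rcu_data.rcu_urgent_qs, false);
	}
	rcu_flavor_sched_clock_irq(user);
	if (rcu_pending())
		invoke_rcu_core();	/* raise RCU_SOFTIRQ */
}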

> Your thought of including rcu_momentary_dyntick_idle() would function
> correctly, but would cause performance issues.  Even adding additional
> compares and branches in that hot code path is visible to the 0day test robot!
> So adding a read-modify-write atomic operation to that code path would
> attract the wrong kind of attention.  ;-)

But wouldn't these performance issues also be visible with
NO_HZ_FULL=y && PREEMPT=n?  Why is PREEMPT=n made an exception? Is it that
0day doesn't test this combination much? :-D
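
(For reference, the read-modify-write atomic in question is the dynticks update
inside rcu_momentary_dyntick_idle(); paraphrasing tree.c once more, details may
differ:)

void rcu_momentary_dyntick_idle(void)
{
	int special;

	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);
	/* Full-barrier atomic add on the per-CPU dynticks counter. */
	special = atomic_add_return(2 * RCU_DYNTICK_CTRL_CTR,
				    &this_cpu_ptr(&rcu_data)->dynticks);
	/* It is illegal to call this from idle. */
	WARN_ON_ONCE(!(special & RCU_DYNTICK_CTRL_CTR));
	rcu_preempt_deferred_qs(current);
}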

thanks,

 - Joel



