On Wed, Feb 19, 2020 at 03:47:29PM +0100, Peter Zijlstra wrote:
> From: Paul E. McKenney <paulmck@xxxxxxxxxx>
> 
> The rcu_nmi_enter_common() and rcu_nmi_exit_common() functions take an
> "irq" parameter that indicates whether these functions are invoked from
> an irq handler (irq==true) or an NMI handler (irq==false).  However,
> recent changes have applied notrace to a few critical functions such
> that rcu_nmi_enter_common() and rcu_nmi_exit_common() may now rely
> on in_nmi().  Note that in_nmi() works no differently than before,
> but rather that tracing is now prohibited in code regions where in_nmi()
> would incorrectly report NMI state.
> 
> This commit therefore removes the "irq" parameter and inlines
> rcu_nmi_enter_common() and rcu_nmi_exit_common() into rcu_nmi_enter()
> and rcu_nmi_exit(), respectively.
> 
> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

Again, thank you.

Would you like to also take the added comment for NOKPROBE_SYMBOL(), or
would you prefer that I carry that separately?  (I dropped it for now to
avoid the conflict with the patch below.)  Here is the latest version of
that comment, posted by Steve Rostedt.

							Thanx, Paul

/*
 * All functions called in the breakpoint trap handler (e.g. do_int3()
 * on x86), must not allow kprobes until the kprobe breakpoint handler
 * is called, otherwise it can cause an infinite recursion.
 * On some archs, rcu_nmi_enter() is called in the breakpoint handler
 * before the kprobe breakpoint handler is called, thus it must be
 * marked as NOKPROBE.
 */

> ---
>  kernel/rcu/tree.c |   45 ++++++++++++++-------------------------------
>  1 file changed, 14 insertions(+), 31 deletions(-)
> 
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -614,16 +614,18 @@ void rcu_user_enter(void)
>  }
>  #endif /* CONFIG_NO_HZ_FULL */
>  
> -/*
> +/**
> + * rcu_nmi_exit - inform RCU of exit from NMI context
> + *
>   * If we are returning from the outermost NMI handler that interrupted an
>   * RCU-idle period, update rdp->dynticks and rdp->dynticks_nmi_nesting
>   * to let the RCU grace-period handling know that the CPU is back to
>   * being RCU-idle.
>   *
> - * If you add or remove a call to rcu_nmi_exit_common(), be sure to test
> + * If you add or remove a call to rcu_nmi_exit(), be sure to test
>   * with CONFIG_RCU_EQS_DEBUG=y.
>   */
> -static __always_inline void rcu_nmi_exit_common(bool irq)
> +void rcu_nmi_exit(void)
>  {
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
>  
> @@ -651,27 +653,16 @@ static __always_inline void rcu_nmi_exit
>  	trace_rcu_dyntick(TPS("Startirq"), rdp->dynticks_nmi_nesting, 0, atomic_read(&rdp->dynticks));
>  	WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
>  
> -	if (irq)
> +	if (!in_nmi())
>  		rcu_prepare_for_idle();
>  
>  	rcu_dynticks_eqs_enter();
>  
> -	if (irq)
> +	if (!in_nmi())
>  		rcu_dynticks_task_enter();
>  }
>  
>  /**
> - * rcu_nmi_exit - inform RCU of exit from NMI context
> - *
> - * If you add or remove a call to rcu_nmi_exit(), be sure to test
> - * with CONFIG_RCU_EQS_DEBUG=y.
> - */
> -void rcu_nmi_exit(void)
> -{
> -	rcu_nmi_exit_common(false);
> -}
> -
> -/**
>   * rcu_irq_exit - inform RCU that current CPU is exiting irq towards idle
>   *
>   * Exit from an interrupt handler, which might possibly result in entering
> @@ -693,7 +684,7 @@ void rcu_nmi_exit(void)
>  void rcu_irq_exit(void)
>  {
>  	lockdep_assert_irqs_disabled();
> -	rcu_nmi_exit_common(true);
> +	rcu_nmi_exit();
>  }
>  
>  /*
> @@ -777,7 +768,7 @@ void rcu_user_exit(void)
>  #endif /* CONFIG_NO_HZ_FULL */
>  
>  /**
> - * rcu_nmi_enter_common - inform RCU of entry to NMI context
> + * rcu_nmi_enter - inform RCU of entry to NMI context
>   * @irq: Is this call from rcu_irq_enter?
>   *
>   * If the CPU was idle from RCU's viewpoint, update rdp->dynticks and
> @@ -786,10 +777,10 @@ void rcu_user_exit(void)
>   * long as the nesting level does not overflow an int.  (You will probably
>   * run out of stack space first.)
>   *
> - * If you add or remove a call to rcu_nmi_enter_common(), be sure to test
> + * If you add or remove a call to rcu_nmi_enter(), be sure to test
>   * with CONFIG_RCU_EQS_DEBUG=y.
>   */
> -static __always_inline void rcu_nmi_enter_common(bool irq)
> +void rcu_nmi_enter(void)
>  {
>  	long incby = 2;
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> @@ -807,12 +798,12 @@ static __always_inline void rcu_nmi_ente
>  	 */
>  	if (rcu_dynticks_curr_cpu_in_eqs()) {
>  
> -		if (irq)
> +		if (!in_nmi())
>  			rcu_dynticks_task_exit();
>  
>  		rcu_dynticks_eqs_exit();
>  
> -		if (irq)
> +		if (!in_nmi())
>  			rcu_cleanup_after_idle();
>  
>  		incby = 1;
> @@ -834,14 +825,6 @@ static __always_inline void rcu_nmi_ente
>  			  rdp->dynticks_nmi_nesting + incby);
>  	barrier();
>  }
> -
> -/**
> - * rcu_nmi_enter - inform RCU of entry to NMI context
> - */
> -void rcu_nmi_enter(void)
> -{
> -	rcu_nmi_enter_common(false);
> -}
>  NOKPROBE_SYMBOL(rcu_nmi_enter);
>  
>  /**
> @@ -869,7 +852,7 @@ NOKPROBE_SYMBOL(rcu_nmi_enter);
>  void rcu_irq_enter(void)
>  {
>  	lockdep_assert_irqs_disabled();
> -	rcu_nmi_enter_common(true);
> +	rcu_nmi_enter();
>  }
>  
>  /*
> 
> 