On Fri, Feb 21, 2020 at 02:34:27PM +0100, Peter Zijlstra wrote:
> To facilitate tracers that need RCU, add some helpers to wrap the
> magic required.
>
> The problem is that we can call into tracers (trace events and
> function tracing) while RCU isn't watching and this can happen from
> any context, including NMI.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Reviewed-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> ---
>  include/linux/rcupdate.h | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
>
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -175,6 +175,35 @@ do { \
>  #error "Unknown RCU implementation specified to kernel configuration"
>  #endif
>
> +/**
> + * trace_rcu_enter - Force RCU to be active, for code that needs RCU readers
> + *
> + * Very similar to RCU_NONIDLE() above.
> + *
> + * Tracing can happen while RCU isn't active yet, for instance in the idle loop
> + * between rcu_idle_enter() and rcu_idle_exit(), or early in exception entry.
> + * RCU will happily ignore any read-side critical sections in this case.
> + *
> + * This function ensures that RCU is aware hereafter and the code can readily
> + * rely on RCU read-side critical sections working as expected.
> + *
> + * This function is NMI safe -- provided in_nmi() is correct and will nest up-to
> + * INT_MAX/2 times.
> + */
> +static inline int trace_rcu_enter(void)
> +{
> +	int state = !rcu_is_watching();
> +
> +	if (state)
> +		rcu_irq_enter_irqsave();
> +
> +	return state;
> +}
> +
> +static inline void trace_rcu_exit(int state)
> +{
> +	if (state)
> +		rcu_irq_exit_irqsave();
> +}
> +
>  /*
>   * The init_rcu_head_on_stack() and destroy_rcu_head_on_stack() calls
>   * are needed for dynamic initialization and destruction of rcu_head

Masami; afaict we also need the below. That is, when you stick an
optimized kprobe in a region RCU is not watching, nothing will make RCU
go.
---
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 9ad5e6b346f8..fa14918613da 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -370,6 +370,7 @@ static bool kprobes_allow_optimization;
  */
 void opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
 {
+	int rcu_flags = trace_rcu_enter();
 	struct kprobe *kp;

 	list_for_each_entry_rcu(kp, &p->list, list) {
@@ -379,6 +380,7 @@ void opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
 		}
 		reset_kprobe_instance();
 	}
+	trace_rcu_exit(rcu_flags);
 }
 NOKPROBE_SYMBOL(opt_pre_handler);