On Wed, Feb 05, 2020 at 05:21:11PM -0500, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)" <rostedt@xxxxxxxxxxx>
>
> As the function_graph tracer can run when RCU is not "watching", it can not
> be protected by synchronize_rcu(); it requires running a task on each CPU
> before the old hash can be freed. schedule_on_each_cpu(ftrace_sync) needs
> to be used instead.
>
> Link: https://lore.kernel.org/r/20200205131110.GT2935@paulmck-ThinkPad-P72
>
> Cc: stable@xxxxxxxxxxxxxxx
> Fixes: b9b0c831bed26 ("ftrace: Convert graph filter to use hash tables")
> Reported-by: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> Reviewed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>

Nice!  If there is much more call for this, perhaps I should take a hint
from the ftrace_sync() comment and add synchronize_rcu_rude().  ;-)

Reviewed-by: "Paul E. McKenney" <paulmck@xxxxxxxxxx>

> ---
>  kernel/trace/ftrace.c | 11 +++++++++--
>  kernel/trace/trace.h  |  2 ++
>  2 files changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 481ede3eac13..3f7ee102868a 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5867,8 +5867,15 @@ ftrace_graph_release(struct inode *inode, struct file *file)
>
>  	mutex_unlock(&graph_lock);
>
> -	/* Wait till all users are no longer using the old hash */
> -	synchronize_rcu();
> +	/*
> +	 * We need to do a hard force of sched synchronization.
> +	 * This is because we use preempt_disable() to do RCU, but
> +	 * the function tracers can be called where RCU is not watching
> +	 * (like before user_exit()). We can not rely on the RCU
> +	 * infrastructure to do the synchronization, thus we must do it
> +	 * ourselves.
> +	 */
> +	schedule_on_each_cpu(ftrace_sync);
>
>  	free_ftrace_hash(old_hash);
>  }
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 8c52f5de9384..3c75d29bd861 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -979,6 +979,7 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
>  	 * Have to open code "rcu_dereference_sched()" because the
>  	 * function graph tracer can be called when RCU is not
>  	 * "watching".
> +	 * Protected with schedule_on_each_cpu(ftrace_sync)
>  	 */
>  	hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());
>
> @@ -1031,6 +1032,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
>  	 * Have to open code "rcu_dereference_sched()" because the
>  	 * function graph tracer can be called when RCU is not
>  	 * "watching".
> +	 * Protected with schedule_on_each_cpu(ftrace_sync)
>  	 */
>  	notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
>  						 !preemptible());
> --
> 2.24.1
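
For context: the reason a (nearly) empty work function is enough here is
that schedule_on_each_cpu() queues one work item per CPU and waits for all
of them to complete, which forces every CPU through a scheduling point.
Any reader that sampled the old hash under preempt_disable() must therefore
have finished by the time the call returns, even on CPUs where RCU is not
"watching".  A minimal sketch of the pattern follows; the ftrace_sync()
body is an assumption about what the helper in kernel/trace/ftrace.c looks
like, and wait_for_old_hash_readers() is a hypothetical wrapper added only
for illustration:

    #include <linux/workqueue.h>

    /*
     * Sketch only: the work function itself does nothing of substance.
     * All of the synchronization comes from schedule_on_each_cpu()
     * running, and waiting for, one work item on every CPU, which
     * guarantees each CPU has passed through a context switch since
     * the old hash pointer was unpublished.
     */
    static void ftrace_sync(struct work_struct *work)
    {
    	/* Probably not needed, but keeps any read ordering honest. */
    	smp_rmb();
    }

    /* Hypothetical helper mirroring ftrace_graph_release() above. */
    static void wait_for_old_hash_readers(void)
    {
    	/* Returns only after the work item has run on every CPU. */
    	schedule_on_each_cpu(ftrace_sync);
    }

This is presumably also what the synchronize_rcu_rude() quip is about: a
primitive whose grace period would be implemented by exactly this kind of
forced pass through every CPU, so callers would not have to open-code the
workqueue trick themselves.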