This is a note to let you know that I've just added the patch titled

    ftrace: Protect ftrace_graph_hash with ftrace_sync

to the 5.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ftrace-protect-ftrace_graph_hash-with-ftrace_sync.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 3062c7d6c0ae324f90d1dd7f551a439d434b0bc7
Author: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Date:   Wed Feb 5 09:20:32 2020 -0500

    ftrace: Protect ftrace_graph_hash with ftrace_sync

    [ Upstream commit 54a16ff6f2e50775145b210bcd94d62c3c2af117 ]

    As the function_graph tracer can run when RCU is not "watching", it can
    not be protected by synchronize_rcu(); it requires running a task on
    each CPU before it can be freed. Calling schedule_on_each_cpu(ftrace_sync)
    is needed instead.

    Link: https://lore.kernel.org/r/20200205131110.GT2935@paulmck-ThinkPad-P72

    Cc: stable@xxxxxxxxxxxxxxx
    Fixes: b9b0c831bed26 ("ftrace: Convert graph filter to use hash tables")
    Reported-by: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
    Reviewed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index d297a8bdc681a..407d8bf4ed93e 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5378,8 +5378,15 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 
 	mutex_unlock(&graph_lock);
 
-	/* Wait till all users are no longer using the old hash */
-	synchronize_rcu();
+	/*
+	 * We need to do a hard force of sched synchronization.
+	 * This is because we use preempt_disable() to do RCU, but
+	 * the function tracers can be called where RCU is not watching
+	 * (like before user_exit()). We can not rely on the RCU
+	 * infrastructure to do the synchronization, thus we must do it
+	 * ourselves.
+	 */
+	schedule_on_each_cpu(ftrace_sync);
 
 	free_ftrace_hash(old_hash);
 }
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 08647723cfab9..a3c29d5fcc616 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -947,6 +947,7 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	 * Have to open code "rcu_dereference_sched()" because the
 	 * function graph tracer can be called when RCU is not
 	 * "watching".
+	 * Protected with schedule_on_each_cpu(ftrace_sync)
 	 */
 	hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());
 
@@ -999,6 +1000,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 	 * Have to open code "rcu_dereference_sched()" because the
 	 * function graph tracer can be called when RCU is not
 	 * "watching".
+	 * Protected with schedule_on_each_cpu(ftrace_sync)
 	 */
 	notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
 						 !preemptible());
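
For context on why schedule_on_each_cpu() is sufficient here: it queues a
work item on every CPU and waits for all of them to complete, so by the time
it returns, every CPU has passed through the scheduler, and any reader that
was inside a preempt_disable() section (even one running where RCU is not
"watching") must have finished. Below is a minimal, self-contained sketch of
the pattern; the example_* names are hypothetical stand-ins, not the
stable-tree code itself (ftrace_sync() in the kernel is likewise an empty
stub):

    #include <linux/preempt.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    /* Stand-in for ftrace_graph_hash. */
    static void __rcu *example_hash;

    /*
     * Empty work function, playing the role of ftrace_sync(): what
     * matters is not what it does, but that schedule_on_each_cpu()
     * will not return until it has run -- i.e. a context switch has
     * occurred -- on every CPU, which ends any preempt-disabled
     * reader there.
     */
    static void example_sync(struct work_struct *work)
    {
    }

    /*
     * Reader side, mirroring ftrace_graph_addr(): open-coded
     * rcu_dereference_sched(), because the caller may run while RCU
     * is not watching.  Protected by schedule_on_each_cpu(example_sync).
     */
    static bool example_reader(void)
    {
    	void *hash;
    	bool ret;

    	preempt_disable_notrace();
    	hash = rcu_dereference_protected(example_hash, !preemptible());
    	ret = hash != NULL;
    	preempt_enable_notrace();
    	return ret;
    }

    /*
     * Writer side, mirroring ftrace_graph_release(): publish the new
     * hash, then force every CPU through the scheduler before freeing
     * the old one.  synchronize_rcu() would be insufficient because
     * readers may run where RCU is not watching (e.g. before
     * user_exit()).
     */
    static void example_replace_hash(void *new_hash)
    {
    	void *old_hash = rcu_dereference_protected(example_hash, 1);

    	rcu_assign_pointer(example_hash, new_hash);
    	schedule_on_each_cpu(example_sync);
    	kfree(old_hash);
    }

The empty work function is the whole trick: merely having run on a CPU
proves that CPU context-switched, which is the only guarantee the writer
needs before freeing the old hash.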