Patch "ftrace: Protect ftrace_graph_hash with ftrace_sync" has been added to the 5.5-stable tree

This is a note to let you know that I've just added the patch titled

    ftrace: Protect ftrace_graph_hash with ftrace_sync

to the 5.5-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ftrace-protect-ftrace_graph_hash-with-ftrace_sync.patch
and it can be found in the queue-5.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit b9e00433bbd09bfa2baf704e02eb3e817d81b36b
Author: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Date:   Wed Feb 5 09:20:32 2020 -0500

    ftrace: Protect ftrace_graph_hash with ftrace_sync
    
    [ Upstream commit 54a16ff6f2e50775145b210bcd94d62c3c2af117 ]
    
    As the function_graph tracer can run when RCU is not "watching", it can not
    be protected by synchronize_rcu(); it requires running a task on each CPU
    before the old hash can be freed. schedule_on_each_cpu(ftrace_sync) must be
    used instead.
    
    Link: https://lore.kernel.org/r/20200205131110.GT2935@paulmck-ThinkPad-P72
    
    Cc: stable@xxxxxxxxxxxxxxx
    Fixes: b9b0c831bed26 ("ftrace: Convert graph filter to use hash tables")
    Reported-by: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
    Reviewed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
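
For context, the replacement works because schedule_on_each_cpu() queues a
work item on every CPU and waits for each instance to finish, which guarantees
that every CPU has passed through the scheduler at least once. A minimal
sketch of the pattern follows; the empty ftrace_sync() body and the helper
name wait_for_graph_hash_readers() are illustrative assumptions, not code
taken from this patch:

#include <linux/workqueue.h>

/*
 * A deliberately empty work function. Scheduling it is the whole point:
 * once it has run on a CPU, that CPU has gone through the scheduler and
 * can no longer be inside a tracer callback that was using the old hash
 * with preemption disabled.
 */
static void ftrace_sync(struct work_struct *work)
{
}

/* Hypothetical helper: blocks until ftrace_sync has run on every CPU. */
static void wait_for_graph_hash_readers(void)
{
	schedule_on_each_cpu(ftrace_sync);
}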

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index e85668cdd8c73..3581bd96d6eb3 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5872,8 +5872,15 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 
 		mutex_unlock(&graph_lock);
 
-		/* Wait till all users are no longer using the old hash */
-		synchronize_rcu();
+		/*
+		 * We need to do a hard force of sched synchronization.
+		 * This is because we use preempt_disable() to do RCU, but
+		 * the function tracers can be called where RCU is not watching
+		 * (like before user_exit()). We can not rely on the RCU
+		 * infrastructure to do the synchronization, thus we must do it
+		 * ourselves.
+		 */
+		schedule_on_each_cpu(ftrace_sync);
 
 		free_ftrace_hash(old_hash);
 	}
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index b769638f005c7..85f475bb48238 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -965,6 +965,7 @@ static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 	 * Have to open code "rcu_dereference_sched()" because the
 	 * function graph tracer can be called when RCU is not
 	 * "watching".
+	 * Protected with schedule_on_each_cpu(ftrace_sync)
 	 */
 	hash = rcu_dereference_protected(ftrace_graph_hash, !preemptible());
 
@@ -1017,6 +1018,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 	 * Have to open code "rcu_dereference_sched()" because the
 	 * function graph tracer can be called when RCU is not
 	 * "watching".
+	 * Protected with schedule_on_each_cpu(ftrace_sync)
 	 */
 	notrace_hash = rcu_dereference_protected(ftrace_graph_notrace_hash,
 						 !preemptible());


