Patch "ftrace: do CPU checking after preemption disabled" has been added to the 5.14-stable tree

This is a note to let you know that I've just added the patch titled

    ftrace: do CPU checking after preemption disabled

to the 5.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ftrace-do-cpu-checking-after-preemption-disabled.patch
and it can be found in the queue-5.14 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 881e9a510519ad50013073a98177867bcced40aa
Author: 王贇 <yun.wang@xxxxxxxxxxxxxxxxx>
Date:   Wed Oct 27 11:15:11 2021 +0800

    ftrace: do CPU checking after preemption disabled
    
    [ Upstream commit d33cc657372366a8959f099c619a208b4c5dc664 ]
    
    With CONFIG_DEBUG_PREEMPT we observed reports like:
    
      BUG: using smp_processor_id() in preemptible
      caller is perf_ftrace_function_call+0x6f/0x2e0
      CPU: 1 PID: 680 Comm: a.out Not tainted
      Call Trace:
       <TASK>
       dump_stack_lvl+0x8d/0xcf
       check_preemption_disabled+0x104/0x110
       ? optimize_nops.isra.7+0x230/0x230
       ? text_poke_bp_batch+0x9f/0x310
       perf_ftrace_function_call+0x6f/0x2e0
       ...
       __text_poke+0x5/0x620
       text_poke_bp_batch+0x9f/0x310
    
    This tells us that the CPU can change after the task is preempted, so
    a CPU check done before preemption is disabled is no longer valid.
    
    Since ftrace_test_recursion_trylock() now disables preemption itself,
    this patch simply moves the CPU check to after the trylock() to address
    the issue.
    
    Link: https://lkml.kernel.org/r/54880691-5fe2-33e7-d12f-1fa6136f5183@xxxxxxxxxxxxxxxxx
    
    CC: Steven Rostedt <rostedt@xxxxxxxxxxx>
    Cc: Guo Ren <guoren@xxxxxxxxxx>
    Cc: Ingo Molnar <mingo@xxxxxxxxxx>
    Cc: "James E.J. Bottomley" <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>
    Cc: Helge Deller <deller@xxxxxx>
    Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
    Cc: Paul Mackerras <paulus@xxxxxxxxx>
    Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
    Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
    Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
    Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Cc: Borislav Petkov <bp@xxxxxxxxx>
    Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
    Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
    Cc: Jiri Kosina <jikos@xxxxxxxxxx>
    Cc: Miroslav Benes <mbenes@xxxxxxx>
    Cc: Petr Mladek <pmladek@xxxxxxxx>
    Cc: Joe Lawrence <joe.lawrence@xxxxxxxxxx>
    Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
    Cc: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
    Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
    Cc: Jisheng Zhang <jszhang@xxxxxxxxxx>
    Reported-by: Abaci <abaci@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Michael Wang <yun.wang@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 03be4435d103f..50cd5a1a7ab4a 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -441,13 +441,13 @@ perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
 	if (!rcu_is_watching())
 		return;
 
-	if ((unsigned long)ops->private != smp_processor_id())
-		return;
-
 	bit = ftrace_test_recursion_trylock(ip, parent_ip);
 	if (bit < 0)
 		return;
 
+	if ((unsigned long)ops->private != smp_processor_id())
+		goto out;
+
 	event = container_of(ops, struct perf_event, ftrace_ops);
 
 	/*
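
For reference, below is a minimal sketch of the pattern the fix relies on
(the callback name example_ftrace_callback and the "per-CPU work" body are
hypothetical, not part of the patch): ftrace_test_recursion_trylock()
disables preemption when it succeeds, so smp_processor_id() is stable
anywhere between the trylock and the matching unlock.

#include <linux/ftrace.h>
#include <linux/trace_recursion.h>
#include <linux/smp.h>

static void example_ftrace_callback(unsigned long ip, unsigned long parent_ip,
				    struct ftrace_ops *ops,
				    struct ftrace_regs *fregs)
{
	int bit;

	/* On success this also disables preemption. */
	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;

	/* Safe: the task cannot migrate here, so the CPU id is stable. */
	if ((unsigned long)ops->private != smp_processor_id())
		goto out;

	/* ... per-CPU work goes here ... */

out:
	/* Re-enables preemption and clears the recursion bit. */
	ftrace_test_recursion_unlock(bit);
}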


