On 2021/10/12 8:43 PM, Steven Rostedt wrote:
> On Tue, 12 Oct 2021 13:40:08 +0800
> 王贇 <yun.wang@xxxxxxxxxxxxxxxxx> wrote:
>
>> --- a/include/linux/trace_recursion.h
>> +++ b/include/linux/trace_recursion.h
>> @@ -214,7 +214,14 @@ static __always_inline void trace_clear_recursion(int bit)
>>  static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
>>  							  unsigned long parent_ip)
>>  {
>> -	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
>> +	int bit;
>> +
>> +	preempt_disable_notrace();
>
> The recursion test does not require preemption to be disabled; it uses the
> task struct, not per_cpu variables, so you should not disable it before the
> test.
>
> 	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
> 	if (bit >= 0)
> 		preempt_disable_notrace();
>
> And if the bit is zero, it means a recursion check was already done by
> another caller (the ftrace handler does the check, followed by calling
> perf), and you really don't even need to disable preemption in that case.
>
> 	if (bit > 0)
> 		preempt_disable_notrace();
>
> And on the unlock, have:
>
> static __always_inline void ftrace_test_recursion_unlock(int bit)
> {
> 	if (bit)
> 		preempt_enable_notrace();
> 	trace_clear_recursion(bit);
> }
>
> But maybe that's over optimizing ;-)

I see. But the user could still call smp_processor_id() after the trylock
returned bit 0...

I guess Peter's point from the very beginning was to prevent exactly such
cases: production kernels don't run with preemption debugging enabled, so
the issue would never be reported, yet it could cause trouble that is very
hard to track down. A way to eliminate that class of issue once and for all
sounds attractive, doesn't it?

Regards,
Michael Wang

>
> -- Steve
>
>
>> +	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
>> +	if (bit < 0)
>> +		preempt_enable_notrace();
>> +
>> +	return bit;
>>  }
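
P.S. To illustrate the concern above, here is a rough sketch (untested; the
handler name and do_something_with() are made up for illustration) of the
kind of callback that would silently run preemptible if trylock handed back
bit 0 without disabling preemption:

static void my_ftrace_handler(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *op, struct ftrace_regs *fregs)
{
	int bit;

	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;

	/*
	 * Only safe if the trylock returned with preemption disabled.
	 * With the "if (bit > 0)" optimization, bit == 0 reaches here
	 * preemptible, so the CPU id below could be stale after a
	 * migration -- and nothing reports it unless the kernel was
	 * built with preemption debugging.
	 */
	do_something_with(smp_processor_id());

	ftrace_test_recursion_unlock(bit);
}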