On Tue, 12 Oct 2021 13:40:08 +0800
王贇 <yun.wang@xxxxxxxxxxxxxxxxx> wrote:

> --- a/include/linux/trace_recursion.h
> +++ b/include/linux/trace_recursion.h
> @@ -214,7 +214,14 @@ static __always_inline void trace_clear_recursion(int bit)
>  static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
>                                                           unsigned long parent_ip)
>  {
> -        return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
> +        int bit;
> +
> +        preempt_disable_notrace();

The recursion test does not require preemption disabled, it uses the task
struct, not per_cpu variables, so you should not disable it before the test.

        bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
        if (bit >= 0)
                preempt_disable_notrace();

And if the bit is zero, it means a recursion check was already done by
another caller (ftrace handler does the check, followed by calling perf),
and you really don't even need to disable preemption in that case.

        if (bit > 0)
                preempt_disable_notrace();

And on the unlock, have:

 static __always_inline void ftrace_test_recursion_unlock(int bit)
 {
        if (bit)
                preempt_enable_notrace();
        trace_clear_recursion(bit);
 }

But maybe that's over optimizing ;-)

-- Steve

> +        bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
> +        if (bit < 0)
> +                preempt_enable_notrace();
> +
> +        return bit;
>  }