On Tue, Nov 22, 2022, at 20:53, Nadav Amit wrote:
> From: Nadav Amit <namit@xxxxxxxxxx>
>
> Functions that are marked as "inline" are currently also not traceable.
> Apparently, this has been done to prevent differences between different
> configs that caused different functions to be traceable on different
> platforms.
>
> Anyhow, this consideration is not very strong, and tying "inline" and
> "notrace" does not seem very beneficial. The "inline" keyword is just a
> hint, and many functions are currently not traceable for this reason.

The original reason was given in commit 93b3cca1ccd3 ("ftrace: Make all
inline tags also include notrace"), which describes:

    Commit 5963e317b1e9d2a ("ftrace/x86: Do not change stacks in DEBUG
    when calling lockdep") prevented lockdep calls from the int3
    breakpoint handler from resetting the stack if a function that was
    called was in the process of being converted for tracing and had a
    breakpoint on it. The idea is, before calling the lockdep code, do
    a load_idt() to the special IDT that kept the breakpoint stack from
    resetting. This worked well as a quick fix for this kernel release,
    until a certain config caused a lockup in the function tracer
    start-up tests. Investigating it, I found that the load_idt that
    was used to prevent the int3 from changing stacks was itself being
    traced!

That sounds like a much stronger reason than the one you describe, and
I would expect your change to cause regressions in similar places.

It's possible that the right answer is that the affected functions
should be marked as __always_inline.
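
A minimal sketch of what that could look like, assuming the current
definition in include/linux/compiler_types.h (which redefines "inline"
to carry notrace) and the IDT-loading helper from
arch/x86/include/asm/desc.h (struct desc_ptr is declared in
arch/x86/include/asm/desc_defs.h):

    /*
     * Illustration only: because "inline" is redefined roughly as
     *
     *   #define inline inline __gnu_inline __inline_maybe_unused notrace
     *
     * every plain-inline function is hidden from ftrace even when the
     * compiler decides not to inline it. A helper that must never be
     * traced and never end up out of line, such as the IDT load used
     * around the int3 handler, would instead be annotated explicitly:
     */
    static __always_inline void native_load_idt(const struct desc_ptr *dtr)
    {
        asm volatile("lidt %0" :: "m" (*dtr));
    }

That way the function stays inlined and invisible to the function
tracer regardless of the config, without requiring every "inline" in
the tree to imply notrace.

     Arnd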