On Mon, 14 Oct 2024 11:58:25 +0100 Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

> To prepare for supporting boot-time page size selection, refactor code
> to remove assumptions about PAGE_SIZE being a compile-time constant. The
> code is intended to be equivalent when a compile-time page size is active.
>
> Convert BUILD_BUG_ON() to BUG_ON() since the argument depends on PAGE_SIZE
> and it's not trivial to test against a page size limit.
>
> Redefine FTRACE_KSTACK_ENTRIES so that "struct ftrace_stacks" is always
> sized at 32K for 64-bit and 16K for 32-bit. It was previously defined in
> terms of PAGE_SIZE (and worked out at the quoted sizes for a 4K page
> size). But for 64K pages, the size expanded to 512K. Given the ftrace
> stacks should be invariant to page size, this seemed like a waste. As a
> side effect, it removes the PAGE_SIZE compile-time constant assumption
> from this code.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
> ---
>
> ***NOTE***
> Any confused maintainers may want to read the cover note here for context:
> https://lore.kernel.org/all/20241014105514.3206191-1-ryan.roberts@xxxxxxx/
>
>  kernel/trace/fgraph.c | 2 +-
>  kernel/trace/trace.c  | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
> index d7d4fb403f6f0..47aa5c8d8090e 100644
> --- a/kernel/trace/fgraph.c
> +++ b/kernel/trace/fgraph.c
> @@ -534,7 +534,7 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func,
>  	if (!current->ret_stack)
>  		return -EBUSY;
>
> -	BUILD_BUG_ON(SHADOW_STACK_SIZE % sizeof(long));
> +	BUG_ON(SHADOW_STACK_SIZE % sizeof(long));

Absolutely not! BUG_ON() is in no way a substitute for BUILD_BUG_ON().

BUILD_BUG_ON() is a non-intrusive way to see if something isn't lined up
correctly, so you can fix it before you execute any code. BUG_ON() is the
most intrusive way to say something is wrong: it crashes the system.

Not to mention, when function graph tracing is enabled, this gets triggered
for *every* function call! So I do not want any runtime test done. Every
nanosecond counts in this code path.

If anything, this needs to be moved to initialization and checked once; if
it fails, give a WARN_ON() and disable function graph tracing.

-- Steve

>
>  	/* Set val to "reserved" with the delta to the new fgraph frame */
>  	val = (FGRAPH_TYPE_RESERVED << FGRAPH_TYPE_SHIFT) | FGRAPH_FRAME_OFFSET;
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index c3b2c7dfadef1..0f2ec3d30579f 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -2887,7 +2887,7 @@ trace_function(struct trace_array *tr, unsigned long ip, unsigned long
>  /* Allow 4 levels of nesting: normal, softirq, irq, NMI */
>  #define FTRACE_KSTACK_NESTING	4
>
> -#define FTRACE_KSTACK_ENTRIES	(PAGE_SIZE / FTRACE_KSTACK_NESTING)
> +#define FTRACE_KSTACK_ENTRIES	(SZ_4K / FTRACE_KSTACK_NESTING)
>
>  struct ftrace_stack {
>  	unsigned long calls[FTRACE_KSTACK_ENTRIES];
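
[Editor's note: for readers unfamiliar with the pattern Steve is describing,
here is a minimal sketch of a one-time initialization check, assuming a
hypothetical init hook. The function name fgraph_sanity_check() and its call
site are illustrative only; they are not part of the patch or the existing
fgraph code.]

/*
 * Hypothetical sketch: perform the sanity check once at initialization
 * instead of on every function entry. If the check fails, warn and report
 * an error so the caller can refuse to enable function graph tracing,
 * rather than crashing the kernel with BUG_ON() in a hot path.
 */
static int __init fgraph_sanity_check(void)
{
	/*
	 * With boot-time page size selection, SHADOW_STACK_SIZE is no
	 * longer a compile-time constant, so BUILD_BUG_ON() cannot be
	 * used; test the invariant once here instead.
	 */
	if (WARN_ON(SHADOW_STACK_SIZE % sizeof(long)))
		return -EINVAL;

	return 0;
}

[The intent, as described above, is that a failure here would disable
function graph tracing entirely, keeping the per-call fast path free of any
runtime check.]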