On Thu, 18 Apr 2019 10:41:40 +0200
Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:

> The per cpu stack trace buffer usage pattern is odd at best. The buffer has
> place for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
> interrupts or exceptions nest after the per cpu buffer was acquired the
> stacktrace length is hardcoded to 8 entries. 512/1024 stack trace entries
> in kernel stacks are unrealistic so the buffer is a complete waste.
>
> Split the buffer into chunks of 64 stack entries which is plenty. This
> allows nesting contexts (interrupts, exceptions) to utilize the cpu buffer
> for stack retrieval and avoids the fixed length allocation along with the
> conditional execution paths.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> ---
>  kernel/trace/trace.c |   77 +++++++++++++++++++++++++--------------------------
>  1 file changed, 39 insertions(+), 38 deletions(-)
>
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -2749,12 +2749,21 @@ trace_function(struct trace_array *tr,
>
>  #ifdef CONFIG_STACKTRACE
>
> -#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
> +/* 64 entries for kernel stacks are plenty */
> +#define FTRACE_KSTACK_ENTRIES	64
> +
>  struct ftrace_stack {
> -	unsigned long	calls[FTRACE_STACK_MAX_ENTRIES];
> +	unsigned long	calls[FTRACE_KSTACK_ENTRIES];
>  };
>
> -static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
> +/* This allows 8 level nesting which is plenty */

Can we make this 4 level nesting and increase the size? (I can see us
going more than 64 deep, kernel developers never cease to amaze me ;-)

That's all we need:

  Context: Normal, softirq, irq, NMI

Is there any other way to nest?

-- Steve

> +#define FTRACE_KSTACK_NESTING	(PAGE_SIZE / sizeof(struct ftrace_stack))
> +
> +struct ftrace_stacks {
> +	struct ftrace_stack	stacks[FTRACE_KSTACK_NESTING];
> +};
> +
> +static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
>  static DEFINE_PER_CPU(int, ftrace_stack_reserve);
>