On Thu, 18 Apr 2019, Andrey Ryabinin wrote:

> On 4/18/19 11:41 AM, Thomas Gleixner wrote:
> > Replace the indirection through struct stack_trace by using the storage
> > array based interfaces.
> >
> > Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > Acked-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> > Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
> > Cc: Alexander Potapenko <glider@xxxxxxxxxx>
> > Cc: kasan-dev@xxxxxxxxxxxxxxxx
> > Cc: linux-mm@xxxxxxxxx
>
> Acked-by: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
>
> >
> >  static inline depot_stack_handle_t save_stack(gfp_t flags)
> >  {
> >  	unsigned long entries[KASAN_STACK_DEPTH];
> > -	struct stack_trace trace = {
> > -		.nr_entries = 0,
> > -		.entries = entries,
> > -		.max_entries = KASAN_STACK_DEPTH,
> > -		.skip = 0
> > -	};
> > +	unsigned int nr_entries;
> >
> > -	save_stack_trace(&trace);
> > -	filter_irq_stacks(&trace);
> > -
> > -	return depot_save_stack(&trace, flags);
> > +	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> > +	nr_entries = filter_irq_stacks(entries, nr_entries);
> > +	return stack_depot_save(entries, nr_entries, flags);
>
> Suggestion for further improvement:
>
> stack_trace_save() shouldn't unwind beyond the irq entry point, so we
> wouldn't need filter_irq_stacks(). Probably none of the call sites care
> about the random stack above the irq entry point, so it doesn't make
> sense to spend resources on unwinding the non-irq stack from an
> interrupt first and filtering it out later.

There are users which care about the full trace. Once we have cleaned up
the whole architecture side, we can add core side filtering which allows
us to

  1) replace the 'skip' number of entries at the beginning

  2) stop the trace when it reaches a certain point

Right now, I don't want to change any of this until the whole mess is
consolidated.

Thanks,

	tglx
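For context on the conversion in the quoted diff: the array based
filter_irq_stacks() truncates the trace at the first entry that falls into
the irq/softirq entry text sections, keeping the irq entry function itself
and dropping the random stack of the interrupted context beneath it. A
sketch of that logic (simplified, not a verbatim copy of the series; the
section symbols are the kernel's linker-provided text markers):

	static bool in_irqentry_text(unsigned long ptr)
	{
		return (ptr >= (unsigned long)&__irqentry_text_start &&
			ptr < (unsigned long)&__irqentry_text_end) ||
		       (ptr >= (unsigned long)&__softirqentry_text_start &&
			ptr < (unsigned long)&__softirqentry_text_end);
	}

	unsigned int filter_irq_stacks(unsigned long *entries,
				       unsigned int nr_entries)
	{
		unsigned int i;

		for (i = 0; i < nr_entries; i++) {
			/*
			 * Keep the irq entry function itself; everything at
			 * higher indices belongs to the interrupted task
			 * stack and is cut off.
			 */
			if (in_irqentry_text(entries[i]))
				return i + 1;
		}
		return nr_entries;
	}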
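The core side filtering mentioned in the reply does not exist in this
series. Purely as a hypothetical illustration of points 1) and 2), it could
fold both the skip count and an early-stop condition into one helper built
on the new array interface (the helper name and stop-callback parameter are
invented here for illustration):

	/*
	 * Hypothetical sketch only -- no such helper exists in the series
	 * under discussion.
	 */
	static unsigned int stack_trace_save_until(unsigned long *store,
						   unsigned int size,
						   unsigned int skipnr,
						   bool (*stop)(unsigned long))
	{
		unsigned int i, nr;

		/* 1) the 'skip' count maps onto the existing skipnr argument */
		nr = stack_trace_save(store, size, skipnr);

		/*
		 * 2) stop the trace at a caller supplied point.  Done as a
		 * post-pass here; a real core implementation would terminate
		 * the unwind itself instead of unwinding everything and
		 * discarding the tail.
		 */
		for (i = 0; i < nr; i++) {
			if (stop && stop(store[i]))
				return i + 1;
		}
		return nr;
	}

With something like this, a KASAN save_stack() could pass a predicate
matching the irq entry sections and drop the separate filter_irq_stacks()
pass, which is exactly the duplicated unwind-then-filter work the
suggestion objects to.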