On Tue, 30 Apr 2024 at 07:46, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> From: Dave Chinner <dchinner@xxxxxxxxxx>
>
> The stackdepot code is used by KASAN and lockdep for recording stack
> traces. Both of these track allocation context information, and so
> their internal allocations must obey the caller's allocation context
> to avoid generating their own false positive warnings that have
> nothing to do with the code they are instrumenting/tracking.
>
> We also don't want recording stack traces to deplete emergency
> memory reserves - debug code is useless if it creates new issues
> that can't be replicated when the debug code is disabled.
>
> Switch the stackdepot allocation masking to use gfp_nested_mask()
> to address these issues. gfp_nested_mask() also strips GFP_ZONEMASK
> naturally, which greatly simplifies this code.
>
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>

Reviewed-by: Marco Elver <elver@xxxxxxxxxx>

> ---
>  lib/stackdepot.c | 11 ++---------
>  1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 68c97387aa54..0bbae49e6177 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -624,15 +624,8 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
>  	 * we won't be able to do that under the lock.
>  	 */
>  	if (unlikely(can_alloc && !READ_ONCE(new_pool))) {
> -		/*
> -		 * Zero out zone modifiers, as we don't have specific zone
> -		 * requirements. Keep the flags related to allocation in atomic
> -		 * contexts and I/O.
> -		 */
> -		alloc_flags &= ~GFP_ZONEMASK;
> -		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
> -		alloc_flags |= __GFP_NOWARN;
> -		page = alloc_pages(alloc_flags, DEPOT_POOL_ORDER);
> +		page = alloc_pages(gfp_nested_mask(alloc_flags),
> +				   DEPOT_POOL_ORDER);
>  		if (page)
>  			prealloc = page_address(page);
>  	}
> --
> 2.43.0
>
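
A note for readers who haven't seen the helper yet: gfp_nested_mask()
lives in include/linux/gfp.h and exists for exactly this class of
"allocation inside the allocator" problem, replacing the three
open-coded masking steps removed above with a single call. The sketch
below is reconstructed from the behaviour described in the changelog;
the exact modifier bits may differ from the real header, so treat it
as illustrative rather than authoritative:

	/*
	 * Illustrative sketch of gfp_nested_mask(); see
	 * include/linux/gfp.h for the authoritative definition.
	 *
	 * Masking with GFP_KERNEL | GFP_ATOMIC keeps only the caller's
	 * reclaim/context bits. Since neither mask contains zone bits,
	 * GFP_ZONEMASK is stripped "naturally", with no separate
	 * ~GFP_ZONEMASK step. The OR-ed in modifiers stop the nested
	 * allocation from warning on failure, retrying aggressively,
	 * or dipping into emergency memory reserves.
	 */
	static inline gfp_t gfp_nested_mask(gfp_t flags)
	{
		return ((flags & (GFP_KERNEL | GFP_ATOMIC | __GFP_NOLOCKDEP)) |
			(__GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN));
	}

Relative to the removed stackdepot code, the net behavioural change
(assuming the sketch matches the header) is the addition of modifiers
like __GFP_NOMEMALLOC, which backs the "don't deplete emergency
reserves" point in the changelog.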