The page-owner tracking code records stack traces during page allocation. To
do this, it must do a memory allocation for the stack information from inside
an existing memory allocation context. This internal allocation must obey the
high-level caller's allocation constraints to avoid generating false positive
warnings that have nothing to do with the code they are instrumenting/tracking
(e.g. through lockdep reclaim state tracking).

We also don't want recording stack traces to deplete emergency memory
reserves - debug code is useless if it creates new issues that can't be
replicated when the debug code is disabled.

Switch the stack tracking allocation masking to use gfp_nested_mask() to
address these issues. gfp_nested_mask() naturally strips GFP_ZONEMASK, too,
which greatly simplifies this code.

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 mm/page_owner.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 742f432e5bf0..55e89c34f0cd 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -168,13 +168,8 @@ static void add_stack_record_to_list(struct stack_record *stack_record,
 	unsigned long flags;
 	struct stack *stack;
 
-	/* Filter gfp_mask the same way stackdepot does, for consistency */
-	gfp_mask &= ~GFP_ZONEMASK;
-	gfp_mask &= (GFP_ATOMIC | GFP_KERNEL);
-	gfp_mask |= __GFP_NOWARN;
-
 	set_current_in_page_owner();
-	stack = kmalloc(sizeof(*stack), gfp_mask);
+	stack = kmalloc(sizeof(*stack), gfp_nested_mask(gfp_mask));
 	if (!stack) {
 		unset_current_in_page_owner();
 		return;
-- 
2.43.0
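
For context when reviewing: gfp_nested_mask() is the helper added to
include/linux/gfp.h earlier in this series. The snippet below is a sketch of
its intended behaviour - keep only the caller's reclaim context bits, drop the
zone modifiers, and make the nested allocation fail fast, silently, and
without dipping into emergency reserves - rather than a verbatim copy of the
committed definition.

/*
 * Sketch of the nested-allocation mask: keeping only the caller's context
 * bits (GFP_KERNEL/GFP_ATOMIC, plus the lockdep annotation) also drops
 * GFP_ZONEMASK, while the added flags make the internal allocation fail
 * fast and silently without consuming emergency reserves.
 */
static inline gfp_t gfp_nested_mask(gfp_t flags)
{
	return ((flags & (GFP_KERNEL | GFP_ATOMIC | __GFP_NOLOCKDEP)) |
		__GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
}

With a helper along those lines, the open-coded masking removed in the hunk
above collapses to the single gfp_nested_mask(gfp_mask) call passed to
kmalloc().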