On 3/27/21 7:21 PM, Sergei Trofimovich wrote:
> On !ARCH_SUPPORTS_DEBUG_PAGEALLOC (like ia64) debug_pagealloc=1
> implies page_poison=on:
>
>     if (page_poisoning_enabled() ||
>          (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
>           debug_pagealloc_enabled()))
>             static_branch_enable(&_page_poisoning_enabled);
>
> page_poison=on needs to take precedence over init_on_free=1.
>
> Before the change it happened too late for the following case:
> - have PAGE_POISONING=y
> - have page_poison unset
> - have !ARCH_SUPPORTS_DEBUG_PAGEALLOC arch (like ia64)
> - have init_on_free=1
> - have debug_pagealloc=1
>
> That way we get both keys enabled:
> - static_branch_enable(&init_on_free);
> - static_branch_enable(&_page_poisoning_enabled);
>
> which leads to poisoned pages returned for __GFP_ZERO pages.

Good catch, thanks for finding the root cause!

> After the change we execute only:
> - static_branch_enable(&_page_poisoning_enabled);
> and ignore init_on_free=1.
>
> CC: Vlastimil Babka <vbabka@xxxxxxx>
> CC: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> CC: linux-mm@xxxxxxxxx
> CC: David Hildenbrand <david@xxxxxxxxxx>
> CC: Andrey Konovalov <andreyknvl@xxxxxxxxx>
> Link: https://lkml.org/lkml/2021/3/26/443
> Signed-off-by: Sergei Trofimovich <slyfox@xxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

Fixes: 8db26a3d4735 ("mm, page_poison: use static key more efficiently")
Cc: <stable@xxxxxxxxxxxxxxx>

> ---
>  mm/page_alloc.c | 30 +++++++++++++++++-------------
>  1 file changed, 17 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d57d9b4f7089..10a8a1d28c11 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -764,32 +764,36 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>   */
>  void init_mem_debugging_and_hardening(void)
>  {
> +	bool page_poison_requested = page_poisoning_enabled();
> +
> +#ifdef CONFIG_PAGE_POISONING
> +	/*
> +	 * Page poisoning is debug page alloc for some arches. If
> +	 * either of those options are enabled, enable poisoning.
> +	 */
> +	if (page_poisoning_enabled() ||
> +	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> +	      debug_pagealloc_enabled())) {
> +		static_branch_enable(&_page_poisoning_enabled);
> +		page_poison_requested = true;
> +	}
> +#endif
> +
>  	if (_init_on_alloc_enabled_early) {
> -		if (page_poisoning_enabled())
> +		if (page_poison_requested)
>  			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
>  				"will take precedence over init_on_alloc\n");
>  		else
>  			static_branch_enable(&init_on_alloc);
>  	}
>  	if (_init_on_free_enabled_early) {
> -		if (page_poisoning_enabled())
> +		if (page_poison_requested)
>  			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
>  				"will take precedence over init_on_free\n");
>  		else
>  			static_branch_enable(&init_on_free);
>  	}
>
> -#ifdef CONFIG_PAGE_POISONING
> -	/*
> -	 * Page poisoning is debug page alloc for some arches. If
> -	 * either of those options are enabled, enable poisoning.
> -	 */
> -	if (page_poisoning_enabled() ||
> -	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> -	      debug_pagealloc_enabled()))
> -		static_branch_enable(&_page_poisoning_enabled);
> -#endif
> -
>  #ifdef CONFIG_DEBUG_PAGEALLOC
>  	if (!debug_pagealloc_enabled())
>  		return;
>