The patch titled
     Subject: mm, page_alloc: reduce static keys in prep_new_page()
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-reduce-static-keys-in-prep_new_page.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, page_alloc: reduce static keys in prep_new_page()

prep_new_page() will always zero a new page (regardless of __GFP_ZERO) when
init_on_alloc is enabled, but will also always skip zeroing if the page was
already zeroed on free by init_on_free or page poisoning.

The latter check, implemented by free_pages_prezeroed(), can involve two
different static keys.  As prep_new_page() is really a hot path, let's
introduce a single static key free_pages_not_prezeroed for this purpose
and initialize it in init_mem_debugging().

Link: https://lkml.kernel.org/r/20201026173358.14704-4-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Mateusz Nosek <mateusznosek0@xxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-reduce-static-keys-in-prep_new_page
+++ a/mm/page_alloc.c
@@ -171,6 +171,8 @@
 DEFINE_STATIC_KEY_FALSE_RO(init_on_free);
 EXPORT_SYMBOL(init_on_free);
 
+static DEFINE_STATIC_KEY_TRUE_RO(free_pages_not_prezeroed);
+
 static bool _init_on_alloc_enabled_early __read_mostly
 				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
 static int __init early_init_on_alloc(char *buf)
@@ -777,6 +779,16 @@ void init_mem_debugging(void)
 		}
 	}
 
+	/*
+	 * We have a special static key that controls whether prep_new_page will
+	 * never need to zero the page. This mode is enabled when page is
+	 * already zeroed by init_on_free or page_poisoning zero mode.
+	 */
+	if (_init_on_free_enabled_early ||
+			(IS_ENABLED(CONFIG_PAGE_POISONING_ZERO)
+			&& page_poisoning_enabled()))
+		static_branch_disable(&free_pages_not_prezeroed);
+
 #ifdef CONFIG_PAGE_POISONING
 	/*
 	 * Page poisoning is debug page alloc for some arches. If
@@ -2216,12 +2228,6 @@ static inline int check_new_page(struct
 	return 1;
 }
 
-static inline bool free_pages_prezeroed(void)
-{
-	return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
-		page_poisoning_enabled_static()) || want_init_on_free();
-}
-
 #ifdef CONFIG_DEBUG_VM
 /*
  * With DEBUG_VM enabled, order-0 pages are checked for expected state when
@@ -2291,7 +2297,8 @@ static void prep_new_page(struct page *p
 {
 	post_alloc_hook(page, order, gfp_flags);
 
-	if (!free_pages_prezeroed() && want_init_on_alloc(gfp_flags))
+	if (static_branch_likely(&free_pages_not_prezeroed)
+			&& want_init_on_alloc(gfp_flags))
 		kernel_init_free_pages(page, 1 << order);
 
 	if (order && (gfp_flags & __GFP_COMP))
_

Patches currently in -mm which might be from vbabka@xxxxxxx are

mm-slub-use-kmem_cache_debug_flags-in-deactivate_slab.patch
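
[Editor's note] For readers unfamiliar with the pattern, below is a minimal
userspace sketch of the logic the patch implements.  Real static keys are a
kernel code-patching primitive with no userspace equivalent, so plain bools
stand in for them here; the names free_pages_not_prezeroed and
_init_on_free_enabled_early mirror the patch, while the *_model functions
and everything else are hypothetical, for illustration only.

	/* sketch.c: models folding two prezeroing conditions into one flag */
	#include <stdbool.h>
	#include <stdio.h>

	/* stand-ins for the two conditions consulted on the free path */
	static bool _init_on_free_enabled_early;
	static bool page_poisoning_zero_enabled;

	/*
	 * Stand-in for the single combined key; starts true, like
	 * DEFINE_STATIC_KEY_TRUE_RO(free_pages_not_prezeroed).
	 */
	static bool free_pages_not_prezeroed = true;

	/*
	 * Mirrors the init_mem_debugging() hunk: both conditions are
	 * folded into one flag, once, at init time.
	 */
	static void init_mem_debugging_model(void)
	{
		if (_init_on_free_enabled_early || page_poisoning_zero_enabled)
			free_pages_not_prezeroed = false;
	}

	/*
	 * Mirrors the prep_new_page() hunk: the hot path now tests a
	 * single flag instead of re-deriving free_pages_prezeroed()
	 * from two separate keys on every allocation.
	 */
	static void prep_new_page_model(bool want_init_on_alloc)
	{
		if (free_pages_not_prezeroed && want_init_on_alloc)
			printf("zeroing page in prep_new_page()\n");
		else
			printf("skipping zeroing (prezeroed on free)\n");
	}

	int main(void)
	{
		_init_on_free_enabled_early = true; /* e.g. init_on_free=1 */
		init_mem_debugging_model();
		prep_new_page_model(true); /* skips: page zeroed on free */
		return 0;
	}

The design point is that the per-allocation cost drops from two static-key
tests (plus a want_init_on_free() call) to a single static_branch_likely()
test, which in the common no-prezeroing configuration compiles down to a
single patched no-op in the hot path.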