The patch titled
     Subject: mm/page_alloc.c: avoid allocating highmem pages via alloc_pages_exact[_nid]
has been added to the -mm tree.  Its filename is
     mm-page_allocc-avoid-allocating-highmem-pages-via-alloc_pages_exact.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-page_allocc-avoid-allocating-highmem-pages-via-alloc_pages_exact.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-page_allocc-avoid-allocating-highmem-pages-via-alloc_pages_exact.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Subject: mm/page_alloc.c: avoid allocating highmem pages via alloc_pages_exact[_nid]

Don't use __GFP_HIGHMEM here because page_address() cannot represent
highmem pages without kmap().  Newly allocated pages would leak, as
page_address() returns NULL for highmem pages.  This only works today
because no current caller specifies __GFP_HIGHMEM.

Link: https://lkml.kernel.org/r/20210902121242.41607-6-linmiaohe@xxxxxxxxxx
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/mm/page_alloc.c~mm-page_allocc-avoid-allocating-highmem-pages-via-alloc_pages_exact
+++ a/mm/page_alloc.c
@@ -5604,8 +5604,8 @@ void *alloc_pages_exact(size_t size, gfp
 	unsigned int order = get_order(size);
 	unsigned long addr;
 
-	if (WARN_ON_ONCE(gfp_mask & __GFP_COMP))
-		gfp_mask &= ~__GFP_COMP;
+	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
+		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
 	addr = __get_free_pages(gfp_mask, order);
 	return make_alloc_exact(addr, order, size);
@@ -5629,8 +5629,8 @@ void * __meminit alloc_pages_exact_nid(i
 	unsigned int order = get_order(size);
 	struct page *p;
 
-	if (WARN_ON_ONCE(gfp_mask & __GFP_COMP))
-		gfp_mask &= ~__GFP_COMP;
+	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
+		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
 	p = alloc_pages_node(nid, gfp_mask, order);
 	if (!p)
_

Patches currently in -mm which might be from linmiaohe@xxxxxxxxxx are

mm-page_allocc-remove-meaningless-vm_bug_on-in-pindex_to_order.patch
mm-page_allocc-simplify-the-code-by-using-macro-k.patch
mm-page_allocc-fix-obsolete-comment-in-free_pcppages_bulk.patch
mm-page_allocc-use-helper-function-zone_spans_pfn.patch
mm-page_allocc-avoid-allocating-highmem-pages-via-alloc_pages_exact.patch
mm-page_isolation-fix-potential-missing-call-to-unset_migratetype_isolate.patch
mm-page_isolation-guard-against-possible-putback-unisolated-page.patch
mm-memory_hotplug-make-hwpoisoned-dirty-swapcache-pages-unmovable.patch
mm-zsmallocc-close-race-window-between-zs_pool_dec_isolated-and-zs_unregister_migration.patch
mm-zsmallocc-combine-two-atomic-ops-in-zs_pool_dec_isolated.patch
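
[Editor's note: to illustrate the rationale above, here is a minimal,
hedged caller-side sketch (not part of the patch; sizes and gfp flags
are illustrative).  alloc_pages_exact() hands back a directly mapped
virtual address, so it cannot return highmem; code that really wants
highmem allocates struct pages and maps them explicitly with kmap():]

	/* Exact-size, directly addressable memory: no __GFP_HIGHMEM here. */
	void *buf = alloc_pages_exact(12 * 1024, GFP_KERNEL | __GFP_ZERO);
	if (buf) {
		/* ... use buf[0 .. 12K-1] ... */
		free_pages_exact(buf, 12 * 1024);
	}

	/* Highmem is allocated as struct pages and mapped explicitly. */
	struct page *page = alloc_pages(GFP_HIGHUSER, 0);
	if (page) {
		void *va = kmap(page);	/* temporary kernel mapping */
		/* ... use va ... */
		kunmap(page);
		__free_pages(page, 0);
	}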