In an ARM32 environment with highmem enabled, calling kmalloc() with
__GFP_HIGHMEM to allocate a large buffer goes through the kmalloc_order()
path and returns NULL. The __GFP_HIGHMEM flag causes alloc_pages() to
allocate highmem pages, which cannot be directly converted to a linear
virtual address, so kmalloc_order() returns NULL even though the page has
already been allocated.

After this change, GFP_SLAB_BUG_MASK is checked before allocating pages,
mirroring what new_slab() does.

Signed-off-by: Long Li <lonuxli.64@xxxxxxxxx>
---
Changes in v2:
- rebase the patch against "[PATCH] mm: Free unused pages in
  kmalloc_order()" [1]
- check GFP_SLAB_BUG_MASK and generate a warning before calling
  alloc_pages() in kmalloc_order()

[1] https://lkml.org/lkml/2020/6/27/16

 mm/slab_common.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a143a8c8f874..3548f4f8374b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -27,6 +27,7 @@
 #include <trace/events/kmem.h>
 
 #include "slab.h"
+#include "internal.h"
 
 enum slab_state slab_state;
 LIST_HEAD(slab_caches);
@@ -815,6 +816,15 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	void *ret = NULL;
 	struct page *page;
 
+	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
+		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
+
+		flags &= ~GFP_SLAB_BUG_MASK;
+		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
+			invalid_mask, &invalid_mask, flags, &flags);
+		dump_stack();
+	}
+
 	flags |= __GFP_COMP;
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
-- 
2.17.1
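
For illustration only (not part of the patch): a hypothetical caller that
could hit this path on 32-bit ARM with highmem. The function name and size
are made up for the example; whether a given size bypasses the kmalloc
caches depends on KMALLOC_MAX_CACHE_SIZE for the configured allocator.
Before this change such a call allocates a highmem page that
kmalloc_order() cannot map to a linear address and returns NULL; with the
change, __GFP_HIGHMEM is masked off via GFP_SLAB_BUG_MASK and a warning
plus stack dump is emitted instead.

	#include <linux/slab.h>
	#include <linux/gfp.h>
	#include <linux/sizes.h>

	static void *example_large_alloc(void)
	{
		/*
		 * 64 KiB exceeds KMALLOC_MAX_CACHE_SIZE on a typical SLUB
		 * configuration, so kmalloc() falls back to kmalloc_order().
		 * Passing __GFP_HIGHMEM here is a caller bug; it is part of
		 * GFP_SLAB_BUG_MASK and is now stripped with a warning.
		 */
		return kmalloc(SZ_64K, GFP_KERNEL | __GFP_HIGHMEM);
	}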