The patch titled
     Subject: mm/large system hash: use vmalloc for size > MAX_ORDER when !hashdist
has been added to the -mm tree.  Its filename is
     mm-large-system-hash-use-vmalloc-for-size-max_order-when-hashdist.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-large-system-hash-use-vmalloc-for-size-max_order-when-hashdist.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-large-system-hash-use-vmalloc-for-size-max_order-when-hashdist.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Nicholas Piggin <npiggin@xxxxxxxxx>
Subject: mm/large system hash: use vmalloc for size > MAX_ORDER when !hashdist

The kernel currently clamps large system hashes to MAX_ORDER when hashdist
is not set, which is rather arbitrary.

vmalloc space is limited on 32-bit machines, but this change should not
consume much more of it, because the small physical memory on such
machines already limits system hash sizes.

Also include "vmalloc" or "linear" in the kernel log message to show which
mapping was used.

Link: http://lkml.kernel.org/r/20190605144814.29319-1-npiggin@xxxxxxxxx
Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

--- a/mm/page_alloc.c~mm-large-system-hash-use-vmalloc-for-size-max_order-when-hashdist
+++ a/mm/page_alloc.c
@@ -7982,6 +7982,7 @@ void *__init alloc_large_system_hash(con
 	unsigned long log2qty, size;
 	void *table = NULL;
 	gfp_t gfp_flags;
+	bool virt;

 	/* allow the kernel cmdline to have a say */
 	if (!numentries) {
@@ -8038,6 +8039,7 @@ void *__init alloc_large_system_hash(con
 	gfp_flags = (flags & HASH_ZERO) ? GFP_ATOMIC | __GFP_ZERO : GFP_ATOMIC;
 	do {
+		virt = false;
 		size = bucketsize << log2qty;
 		if (flags & HASH_EARLY) {
 			if (flags & HASH_ZERO)
@@ -8045,26 +8047,26 @@ void *__init alloc_large_system_hash(con
 			else
 				table = memblock_alloc_raw(size, SMP_CACHE_BYTES);
-		} else if (hashdist) {
+		} else if (get_order(size) >= MAX_ORDER || hashdist) {
 			table = __vmalloc(size, gfp_flags, PAGE_KERNEL);
+			virt = true;
 		} else {
 			/*
 			 * If bucketsize is not a power-of-two, we may free
 			 * some pages at the end of hash table which
 			 * alloc_pages_exact() automatically does
 			 */
-			if (get_order(size) < MAX_ORDER) {
-				table = alloc_pages_exact(size, gfp_flags);
-				kmemleak_alloc(table, size, 1, gfp_flags);
-			}
+			table = alloc_pages_exact(size, gfp_flags);
+			kmemleak_alloc(table, size, 1, gfp_flags);
 		}
 	} while (!table && size > PAGE_SIZE && --log2qty);

 	if (!table)
 		panic("Failed to allocate %s hash table\n", tablename);

-	pr_info("%s hash table entries: %ld (order: %d, %lu bytes)\n",
-		tablename, 1UL << log2qty, ilog2(size) - PAGE_SHIFT, size);
+	pr_info("%s hash table entries: %ld (order: %d, %lu bytes, %s)\n",
+		tablename, 1UL << log2qty, ilog2(size) - PAGE_SHIFT, size,
+		virt ? "vmalloc" : "linear");

 	if (_hash_shift)
 		*_hash_shift = log2qty;
_

Patches currently in -mm which might be from npiggin@xxxxxxxxx are

mm-large-system-hash-use-vmalloc-for-size-max_order-when-hashdist.patch
mm-large-system-hash-clear-hashdist-when-only-one-node-with-memory-is-booted.patch