The patch titled
     Subject: mm: hugetlb: fix hugetlb_cma_reserve() if CONFIG_NUMA isn't set
has been added to the -mm tree.  Its filename is
     mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Roman Gushchin <guro@xxxxxx>
Subject: mm: hugetlb: fix hugetlb_cma_reserve() if CONFIG_NUMA isn't set

If CONFIG_NUMA isn't set, there is no need to ensure that the hugetlb cma
area belongs to a specific numa node.

min/max_low_pfn can be used for limiting the maximum size of the
hugetlb_cma area.

Also for_each_mem_pfn_range() is defined only if
CONFIG_HAVE_MEMBLOCK_NODE_MAP is set, and on arm (unlike most other
architectures) it depends on CONFIG_NUMA.  This makes the build fail if
CONFIG_NUMA isn't set.
Link: http://lkml.kernel.org/r/20200318153424.3202304-1-guro@xxxxxx
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Reported-by: Andreas Schaufler <andreas.schaufler@xxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Guido Günther <agx@xxxxxxxxxxx>
Cc: Naresh Kamboju <naresh.kamboju@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set
+++ a/mm/hugetlb.c
@@ -5441,16 +5441,21 @@ void __init hugetlb_cma_reserve(int orde
 	reserved = 0;
 	for_each_node_state(nid, N_ONLINE) {
-		unsigned long start_pfn, end_pfn;
 		unsigned long min_pfn = 0, max_pfn = 0;
-		int res, i;
+		int res;
+#ifdef CONFIG_NUMA
+		unsigned long start_pfn, end_pfn;
+		int i;
 
 		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 			if (!min_pfn)
 				min_pfn = start_pfn;
 			max_pfn = end_pfn;
 		}
-
+#else
+		min_pfn = min_low_pfn;
+		max_pfn = max_low_pfn;
+#endif
 		size = max(per_node, hugetlb_cma_size - reserved);
 		size = round_up(size, PAGE_SIZE << order);
_

Patches currently in -mm which might be from guro@xxxxxx are

mm-fork-fix-kernel_stack-memcg-stats-for-various-stack-implementations.patch
mm-memcg-slab-introduce-mem_cgroup_from_obj.patch
mm-memcg-slab-introduce-mem_cgroup_from_obj-v2.patch
mm-kmem-cleanup-__memcg_kmem_charge_memcg-arguments.patch
mm-kmem-cleanup-memcg_kmem_uncharge_memcg-arguments.patch
mm-kmem-rename-memcg_kmem_uncharge-into-memcg_kmem_uncharge_page.patch
mm-kmem-switch-to-nr_pages-in-__memcg_kmem_charge_memcg.patch
mm-memcg-slab-cache-page-number-in-memcg_uncharge_slab.patch
mm-kmem-rename-__memcg_kmem_uncharge_memcg-to-__memcg_kmem_uncharge.patch
mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations.patch
mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations-fix.patch
mm-hugetlb-optionally-allocate-gigantic-hugepages-using-cma.patch
mm-hugetlb-optionally-allocate-gigantic-hugepages-using-cma-fix.patch
mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set.patch