[to-be-updated] mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set.patch removed from -mm tree

The patch titled
     Subject: mm: hugetlb: fix hugetlb_cma_reserve() if CONFIG_NUMA isn't set
has been removed from the -mm tree.  Its filename was
     mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Roman Gushchin <guro@xxxxxx>
Subject: mm: hugetlb: fix hugetlb_cma_reserve() if CONFIG_NUMA isn't set

If CONFIG_NUMA isn't set, there is no need to ensure that the hugetlb CMA
area belongs to a specific NUMA node.

In that case, min_low_pfn and max_low_pfn can be used to bound the
hugetlb_cma area instead.

Also, for_each_mem_pfn_range() is defined only if
CONFIG_HAVE_MEMBLOCK_NODE_MAP is set, which on arm (unlike most other
architectures) depends on CONFIG_NUMA.  This makes the build fail if
CONFIG_NUMA isn't set.
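
For context (not part of the patch), the intended pfn-range selection can
be sketched as below.  The symbols min_low_pfn, max_low_pfn and
for_each_mem_pfn_range() are the existing kernel ones; the wrapper
function name is hypothetical and used only for illustration:

	/*
	 * Illustrative sketch only: compute the pfn range used to place
	 * the hugetlb CMA area for a given node.  With CONFIG_NUMA the
	 * range is derived from the node's memblock regions; without it
	 * the whole low-memory range is used.
	 */
	static void __init hugetlb_cma_pfn_range(int nid,
						 unsigned long *min_pfn,
						 unsigned long *max_pfn)
	{
	#ifdef CONFIG_NUMA
		unsigned long start_pfn, end_pfn;
		int i;

		*min_pfn = 0;
		*max_pfn = 0;
		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
			if (!*min_pfn)
				*min_pfn = start_pfn;
			*max_pfn = end_pfn;
		}
	#else
		/* Single "node": bound the area by the low-memory pfns. */
		*min_pfn = min_low_pfn;
		*max_pfn = max_low_pfn;
	#endif
	}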

Link: http://lkml.kernel.org/r/20200318153424.3202304-1-guro@xxxxxx
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Reported-by: Andreas Schaufler <andreas.schaufler@xxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Guido Günther <agx@xxxxxxxxxxx>
Cc: Naresh Kamboju <naresh.kamboju@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-fix-hugetlb_cma_reserve-if-config_numa-isnt-set
+++ a/mm/hugetlb.c
@@ -5576,16 +5576,21 @@ void __init hugetlb_cma_reserve(int orde
 
 	reserved = 0;
 	for_each_node_state(nid, N_ONLINE) {
-		unsigned long start_pfn, end_pfn;
 		unsigned long min_pfn = 0, max_pfn = 0;
-		int res, i;
+		int res;
+#ifdef CONFIG_NUMA
+		unsigned long start_pfn, end_pfn;
+		int i;
 
 		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 			if (!min_pfn)
 				min_pfn = start_pfn;
 			max_pfn = end_pfn;
 		}
-
+#else
+		min_pfn = min_low_pfn;
+		max_pfn = max_low_pfn;
+#endif
 		size = min(per_node, hugetlb_cma_size - reserved);
 		size = round_up(size, PAGE_SIZE << order);
 
_

Patches currently in -mm which might be from guro@xxxxxx are

mm-memcg-slab-introduce-mem_cgroup_from_obj.patch
mm-kmem-cleanup-__memcg_kmem_charge_memcg-arguments.patch
mm-kmem-cleanup-memcg_kmem_uncharge_memcg-arguments.patch
mm-kmem-rename-memcg_kmem_uncharge-into-memcg_kmem_uncharge_page.patch
mm-kmem-switch-to-nr_pages-in-__memcg_kmem_charge_memcg.patch
mm-memcg-slab-cache-page-number-in-memcg_uncharge_slab.patch
mm-kmem-rename-__memcg_kmem_uncharge_memcg-to-__memcg_kmem_uncharge.patch
mm-memcg-make-memoryoomgroup-tolerable-to-task-migration.patch



