[to-be-updated] mm-hugetlb-optionally-allocate-gigantic-hugepages-using-cma-fix-2.patch removed from -mm tree

The patch titled
     Subject: mm: hugetlb: fix per-node size calculation for hugetlb_cma
has been removed from the -mm tree.  Its filename was
     mm-hugetlb-optionally-allocate-gigantic-hugepages-using-cma-fix-2.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Roman Gushchin <guro@xxxxxx>
Subject: mm: hugetlb: fix per-node size calculation for hugetlb_cma

Aslan found a bug in the per-node hugetlb_cma area size calculation:
the total remaining size should cap the per-node area size rather
than act as its lower bound; the code used max() where min() was
intended (see the sketch after the boot logs below).

Without the fix:
[    0.004136] hugetlb_cma: reserve 2048 MiB, up to 1024 MiB per node
[    0.004138] cma: Reserved 2048 MiB at 0x0000000180000000
[    0.004139] hugetlb_cma: reserved 2048 MiB on node 0

With the fix:
[    0.006780] hugetlb_cma: reserve 2048 MiB, up to 1024 MiB per node
[    0.006786] cma: Reserved 1024 MiB at 0x00000001c0000000
[    0.006787] hugetlb_cma: reserved 1024 MiB on node 0
[    0.006788] cma: Reserved 1024 MiB at 0x00000003c0000000
[    0.006789] hugetlb_cma: reserved 1024 MiB on node 1
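
To illustrate the arithmetic, here is a minimal userspace sketch, not
the kernel function itself: the two-node/2048 MiB setup mirrors the
boot logs above, and the MIN/MAX macros and loop are simplified
stand-ins for the hugetlb_cma_reserve() code (the real code also
rounds each size up to the gigantic-page size, omitted here).

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	unsigned long hugetlb_cma_size = 2048;	/* total budget, MiB */
	int nr_nodes = 2;
	unsigned long per_node = hugetlb_cma_size / nr_nodes;	/* 1024 MiB */
	unsigned long reserved, size;
	int nid;

	/* Buggy variant: max() lets node 0 swallow the whole budget. */
	for (reserved = 0, nid = 0;
	     nid < nr_nodes && reserved < hugetlb_cma_size; nid++) {
		size = MAX(per_node, hugetlb_cma_size - reserved);
		reserved += size;
		printf("max(): node %d reserves %lu MiB\n", nid, size);
	}

	/* Fixed variant: min() caps each node at the remaining budget. */
	for (reserved = 0, nid = 0;
	     nid < nr_nodes && reserved < hugetlb_cma_size; nid++) {
		size = MIN(per_node, hugetlb_cma_size - reserved);
		reserved += size;
		printf("min(): node %d reserves %lu MiB\n", nid, size);
	}
	return 0;
}

With max(), node 0 gets max(1024, 2048 - 0) = 2048 MiB and the loop
exits with the entire reservation on one node; with min(), each node
gets min(1024, remaining) = 1024 MiB, matching the fixed log.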

Link: http://lkml.kernel.org/r/20200323233411.2407279-1-guro@xxxxxx
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Reported-by: Aslan Bakirov <aslan@xxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Andreas Schaufler <andreas.schaufler@xxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Joonsoo Kim <js1304@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/hugetlb.c~mm-hugetlb-optionally-allocate-gigantic-hugepages-using-cma-fix-2
+++ a/mm/hugetlb.c
@@ -5583,7 +5583,7 @@ void __init hugetlb_cma_reserve(int orde
 			max_pfn = end_pfn;
 		}
 
-		size = max(per_node, hugetlb_cma_size - reserved);
+		size = min(per_node, hugetlb_cma_size - reserved);
 		size = round_up(size, PAGE_SIZE << order);
 
 		if (size > ((max_pfn - min_pfn) << PAGE_SHIFT) / 2) {
_

Patches currently in -mm which might be from guro@xxxxxx are

mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations.patch
mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations-fix.patch



