+ mm-hugetlb-yield-when-prepping-struct-pages.patch added to -mm tree

The patch titled
     Subject: mm: hugetlb: yield when prepping struct pages
has been added to the -mm tree.  Its filename is
     mm-hugetlb-yield-when-prepping-struct-pages.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-yield-when-prepping-struct-pages.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-yield-when-prepping-struct-pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Cannon Matthews <cannonmatthews@xxxxxxxxxx>
Subject: mm: hugetlb: yield when prepping struct pages

When booting with very large numbers of gigantic (i.e.  1G) pages, the
operations in the loop of gather_bootmem_prealloc(), and specifically
prep_compound_gigantic_page(), take a very long time and can cause a
softlockup if enough pages are requested at boot.

For example, booting with 3844 1G pages requires prepping
(set_compound_head, init the count) over 1 billion 4K tail pages, which
takes considerable time.  This should also apply to reserving the same
amount of memory as 2M pages, as the same number of struct pages is
affected in either case.

Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
prevent this lockup.
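
For context, a simplified sketch of the loop in question (condensed
from mm/hugetlb.c of this era; WARN_ON sanity checks and comments are
elided, so treat it as illustrative rather than exact), with the new
cond_resched() in place:

	static void __init gather_bootmem_prealloc(void)
	{
		struct huge_bootmem_page *m;

		list_for_each_entry(m, &huge_boot_pages, list) {
			struct page *page = virt_to_page(m);
			struct hstate *h = m->hstate;

			/* sets compound heads/counts on every tail struct page */
			prep_compound_huge_page(page, h->order);
			prep_new_huge_page(h, page, page_to_nid(page));
			put_page(page);		/* free into the hugepage allocator */

			if (hstate_is_gigantic(h))
				adjust_managed_page_count(page, 1 << h->order);
			cond_resched();		/* yield once per huge page */
		}
	}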

Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844; no
softlockup was reported, and the hugepages were reported as successfully
set up.

Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@xxxxxxxxxx
Signed-off-by: Cannon Matthews <cannonmatthews@xxxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Andres Lagar-Cavilla <andreslc@xxxxxxxxxx>
Cc: Peter Feiner <pfeiner@xxxxxxxxxx>
Cc: Greg Thelen <gthelen@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---


diff -puN mm/hugetlb.c~mm-hugetlb-yield-when-prepping-struct-pages mm/hugetlb.c
--- a/mm/hugetlb.c~mm-hugetlb-yield-when-prepping-struct-pages
+++ a/mm/hugetlb.c
@@ -2163,6 +2163,7 @@ static void __init gather_bootmem_preall
 		 */
 		if (hstate_is_gigantic(h))
 			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
 	}
 }
 
_

Patches currently in -mm which might be from cannonmatthews@xxxxxxxxxx are

mm-hugetlb-yield-when-prepping-struct-pages.patch



