On Wed, May 13, 2015 at 01:55:56PM -0700, Andrew Morton wrote:
> On Wed, 13 May 2015 01:44:22 +0000 Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx> wrote:
>
> > > >  	order = huge_page_order(h);
> > > >  	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
> > > > +	VM_BUG_ON(order == UINT_MAX);
> > > >  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> > > >  		dissolve_free_huge_page(pfn_to_page(pfn));
> > >
> > > Do we need to calculate this each time?  Can it be done in
> > > hugetlb_init_hstates(), save the result in a global?
> >
> > Yes, it should work. How about the following?
> > This adds 4 bytes to .data due to the new global variable, but reduces
> > .text by 47 bytes, so it's a win in total.
> >
> >    text    data     bss     dec     hex filename
> >   28313     469   84236  113018   1b97a mm/hugetlb.o (above patch)
> >   28266     473   84236  112975   1b94f mm/hugetlb.o (below patch)
>
> Looks good.  Please turn it into a real patch and send it over when
> convenient?

Yes, the patch is attached below.

> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -40,6 +40,7 @@ int hugepages_treat_as_movable;
> >  int hugetlb_max_hstate __read_mostly;
> >  unsigned int default_hstate_idx;
> >  struct hstate hstates[HUGE_MAX_HSTATE];
> > +unsigned int minimum_order __read_mostly;
>
> static.
>
> And a comment would be nice ;)

OK, fixed.

> > ...
> >
> > @@ -1626,11 +1621,16 @@ static void __init hugetlb_init_hstates(void)
> >  {
> >  	struct hstate *h;
> >
> > +	minimum_order = UINT_MAX;
>
> Do this at compile time.

OK.

> >  	for_each_hstate(h) {
> > +		if (minimum_order > huge_page_order(h))
> > +			minimum_order = huge_page_order(h);
> > +
> >  		/* oversize hugepages were init'ed in early boot */
> >  		if (!hstate_is_gigantic(h))
> >  			hugetlb_hstate_alloc_pages(h);
> >  	}
> > +	VM_BUG_ON(minimum_order == UINT_MAX);
>
> Is the system hopelessly screwed up when this happens, or will it still
> be able to boot up and do useful things?
> If the system is hopelessly broken then BUG_ON or, better, panic should
> be used here.  But if there's still potential to do useful things then
> I guess VM_BUG_ON is appropriate.

When this happens, the hugetlb subsystem is broken but the system can
still run as usual, so this is not critical. So I think VM_BUG_ON is OK.

Thanks,
Naoya Horiguchi
---
From: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Subject: [PATCH] mm/hugetlb: introduce minimum hugepage order

Currently the initial value of order in dissolve_free_huge_page is 64 or
32, which leads to the following warning in static checker:

  mm/hugetlb.c:1203 dissolve_free_huge_pages()
  warn: potential right shift more than type allows '9,18,64'

This is a potential risk of infinite loop, because 1 << order (== 0) is
used in for-loop like this:

  for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
      ...

So this patch fixes it by using global minimum_order calculated at boot
time.

    text    data     bss     dec     hex filename
   28313     469   84236  113018   1b97a mm/hugetlb.o
   28256     473   84236  112965   1b945 mm/hugetlb.o (patched)

Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
---
 mm/hugetlb.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 271e4432734c..8c4c1f9f9a9a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,11 @@ int hugepages_treat_as_movable;
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
+/*
+ * Minimum page order among possible hugepage sizes, set to a proper value
+ * at boot time.
+ */
+static unsigned int minimum_order __read_mostly = UINT_MAX;

 __initdata LIST_HEAD(huge_boot_pages);

@@ -1188,19 +1193,13 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
 	unsigned long pfn;
-	struct hstate *h;

 	if (!hugepages_supported())
 		return;

-	/* Set scan step to minimum hugepage size */
-	for_each_hstate(h)
-		if (order > huge_page_order(h))
-			order = huge_page_order(h);
-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
+	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }

@@ -1627,10 +1626,14 @@ static void __init hugetlb_init_hstates(void)
 	struct hstate *h;

 	for_each_hstate(h) {
+		if (minimum_order > huge_page_order(h))
+			minimum_order = huge_page_order(h);
+
 		/* oversize hugepages were init'ed in early boot */
 		if (!hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
 	}
+	VM_BUG_ON(minimum_order == UINT_MAX);
 }

 static char * __init memfmt(char *buf, unsigned long n)
--
2.1.0

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .