The patch titled
     Subject: mm: calculate deferred pages after skipping mirrored memory
has been added to the -mm tree.  Its filename is
     mm-calculate-deferred-pages-after-skipping-mirrored-memory.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-calculate-deferred-pages-after-skipping-mirrored-memory.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-calculate-deferred-pages-after-skipping-mirrored-memory.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Subject: mm: calculate deferred pages after skipping mirrored memory

update_defer_init() should be called only when a struct page is about to
be initialized, because it counts the number of initialized struct pages;
but we may skip some struct pages when there is mirrored memory.  So,
move the update_defer_init() call to after the mirrored-memory check.

Also, rename update_defer_init() to defer_init() and invert the returned
boolean to emphasize that this is a predicate: it tells whether the rest
of the memmap initialization should be deferred.

Make the function self-contained: instead of passing in the number of
pages already initialized in this zone, track it with static counters.
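To make the new control flow concrete, here is a minimal user-space
sketch of the static-counter pattern that defer_init() relies on (not
kernel code: defer_init_sketch, SECTION_PAGES, and STATIC_PGCNT are
illustrative stand-ins for the real function, PAGES_PER_SECTION, and
pgdat->static_init_pgcnt, and the low-zone early return is omitted):

#include <stdbool.h>
#include <stdio.h>

#define SECTION_PAGES	(1UL << 15)         /* assumed pages per section */
#define STATIC_PGCNT	(2 * SECTION_PAGES) /* assumed init threshold */

/*
 * Static locals persist across calls, so the caller no longer passes a
 * nr_initialised pointer around.  A changed end_pfn means a new zone,
 * which resets the counter.  This is race-free only because the memmap
 * is initialized single-threaded, early in boot.
 */
static bool defer_init_sketch(unsigned long pfn, unsigned long end_pfn)
{
	static unsigned long prev_end_pfn, nr_initialised;

	if (prev_end_pfn != end_pfn) {
		prev_end_pfn = end_pfn;
		nr_initialised = 0;
	}

	nr_initialised++;
	/* Defer the rest once past the threshold, on a section boundary. */
	return nr_initialised > STATIC_PGCNT &&
	       (pfn & (SECTION_PAGES - 1)) == 0;
}

int main(void)
{
	unsigned long pfn, end_pfn = 1UL << 20;

	for (pfn = 0; pfn < end_pfn; pfn++)
		if (defer_init_sketch(pfn, end_pfn)) {
			printf("deferring from pfn %lu\n", pfn);
			break;
		}
	return 0;
}

Because the counters live inside the function, memmap_init_zone() can
drop its local pgdat and nr_initialised variables, as the diff below
shows.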
Link: http://lkml.kernel.org/r/20180724235520.10200-3-pasha.tatashin@xxxxxxxxxx
Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Abdul Haleem <abdhalee@xxxxxxxxxxxxxxxxxx>
Cc: Baoquan He <bhe@xxxxxxxxxx>
Cc: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Jérôme Glisse <jglisse@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Souptick Joarder <jrdr.linux@xxxxxxxxx>
Cc: Steven Sistare <steven.sistare@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Wei Yang <richard.weiyang@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

diff -puN mm/page_alloc.c~mm-calculate-deferred-pages-after-skipping-mirrored-memory mm/page_alloc.c
--- a/mm/page_alloc.c~mm-calculate-deferred-pages-after-skipping-mirrored-memory
+++ a/mm/page_alloc.c
@@ -306,24 +306,28 @@ static inline bool __meminit early_page_
 }
 
 /*
- * Returns false when the remaining initialisation should be deferred until
+ * Returns true when the remaining initialisation should be deferred until
  * later in the boot cycle when it can be parallelised.
  */
-static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
+	static unsigned long prev_end_pfn, nr_initialised;
+
+	if (prev_end_pfn != end_pfn) {
+		prev_end_pfn = end_pfn;
+		nr_initialised = 0;
+	}
+
 	/* Always populate low zones for address-constrained allocations */
-	if (zone_end < pgdat_end_pfn(pgdat))
-		return true;
-	(*nr_initialised)++;
-	if ((*nr_initialised > pgdat->static_init_pgcnt) &&
-	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
-		pgdat->first_deferred_pfn = pfn;
+	if (end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
 		return false;
+	nr_initialised++;
+	if ((nr_initialised > NODE_DATA(nid)->static_init_pgcnt) &&
+	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
+		NODE_DATA(nid)->first_deferred_pfn = pfn;
+		return true;
 	}
-
-	return true;
+	return false;
 }
 #else
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -331,11 +335,9 @@ static inline bool early_page_uninitiali
 	return false;
 }
 
-static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
-	return true;
+	return false;
 }
 #endif
 
@@ -5459,9 +5461,7 @@ void __meminit memmap_init_zone(unsigned
 		struct vmem_altmap *altmap)
 {
 	unsigned long end_pfn = start_pfn + size;
-	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long pfn;
-	unsigned long nr_initialised = 0;
 	struct page *page;
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 	struct memblock_region *r = NULL, *tmp;
@@ -5492,8 +5492,6 @@ void __meminit memmap_init_zone(unsigned
 			continue;
 		if (!early_pfn_in_nid(pfn, nid))
 			continue;
-		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
-			break;
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 		/*
@@ -5516,6 +5514,8 @@ void __meminit memmap_init_zone(unsigned
 			}
 		}
 #endif
+		if (defer_init(nid, pfn, end_pfn))
+			break;
 
 not_early:
 		page = pfn_to_page(pfn);
_

Patches currently in -mm which might be from pasha.tatashin@xxxxxxxxxx are

mm-skip-invalid-pages-block-at-a-time-in-zero_resv_unresv.patch
mm-sparse-abstract-sparse-buffer-allocations.patch
mm-sparse-use-the-new-sparse-buffer-functions-in-non-vmemmap.patch
mm-sparse-move-buffer-init-fini-to-the-common-place.patch
mm-sparse-add-new-sparse_init_nid-and-sparse_init.patch
mm-sparse-delete-old-sprase_init-and-enable-new-one.patch
mm-sparse-delete-old-sparse_init-and-enable-new-one-v6.patch
mm-make-memmap_init-a-proper-function.patch
mm-calculate-deferred-pages-after-skipping-mirrored-memory.patch
mm-move-mirrored-memory-specific-code-outside-of-memmap_init_zone.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html