The patch titled
     Subject: mm: move hot-plug specific memory init into separate functions and optimize
has been added to the -mm tree.  Its filename is
     mm-move-hot-plug-specific-memory-init-into-separate-functions-and-optimize.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-move-hot-plug-specific-memory-init-into-separate-functions-and-optimize.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-move-hot-plug-specific-memory-init-into-separate-functions-and-optimize.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
Subject: mm: move hot-plug specific memory init into separate functions and optimize

Combine the bits in memmap_init_zone and memmap_init_zone_device that are
related to hotplug into a single function called __memmap_init_hotplug.

Also take the opportunity to integrate __init_single_page's functionality
into this function.  In doing so we can get rid of some of the redundancy
such as the LRU pointers versus the pgmap.

Link: http://lkml.kernel.org/r/154361479366.7497.13916678539146224699.stgit@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
Reviewed-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Dave Jiang <dave.jiang@xxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Khalid Aziz <khalid.aziz@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx>
Cc: Pavel Tatashin <pavel.tatashin@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |  208 +++++++++++++++++++++++++++++-----------------
 1 file changed, 135 insertions(+), 73 deletions(-)

--- a/mm/page_alloc.c~mm-move-hot-plug-specific-memory-init-into-separate-functions-and-optimize
+++ a/mm/page_alloc.c
@@ -1181,8 +1181,9 @@ static void free_one_page(struct zone *z
 	spin_unlock(&zone->lock);
 }
 
-static void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid)
+static void __meminit __init_struct_page_nolru(struct page *page,
+					       unsigned long pfn,
+					       unsigned long zone, int nid)
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
@@ -1191,7 +1192,6 @@ static void __meminit __init_single_page
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
 
-	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
 	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
 	if (!is_highmem_idx(zone))
@@ -1199,6 +1199,80 @@ static void __meminit __init_single_page
 #endif
 }
 
+static void __meminit __init_single_page(struct page *page, unsigned long pfn,
+					 unsigned long zone, int nid)
+{
+	__init_struct_page_nolru(page, pfn, zone, nid);
+	INIT_LIST_HEAD(&page->lru);
+}
+
+static void __meminit __init_pageblock(unsigned long start_pfn,
+				       unsigned long nr_pages,
+				       unsigned long zone, int nid,
+				       struct dev_pagemap *pgmap)
+{
+	unsigned long nr_pgmask = pageblock_nr_pages - 1;
+	struct page *start_page = pfn_to_page(start_pfn);
+	unsigned long pfn = start_pfn + nr_pages - 1;
+	struct page *page;
+
+	/*
+	 * Enforce the following requirements:
+	 * size > 0
+	 * size < pageblock_nr_pages
+	 * start_pfn -> pfn does not cross pageblock_nr_pages boundary
+	 */
+	VM_BUG_ON(((start_pfn ^ pfn) | (nr_pages - 1)) > nr_pgmask);
+
+	/*
+	 * Work from highest page to lowest, this way we will still be
+	 * warm in the cache when we call set_pageblock_migratetype
+	 * below.
+	 *
+	 * The loop is based around the page pointer as the main index
+	 * instead of the pfn because pfn is not used inside the loop if
+	 * the section number is not in page flags and WANT_PAGE_VIRTUAL
+	 * is not defined.
+	 */
+	for (page = start_page + nr_pages; page-- != start_page; pfn--) {
+		__init_struct_page_nolru(page, pfn, zone, nid);
+		/*
+		 * Mark page reserved as it will need to wait for onlining
+		 * phase for it to be fully associated with a zone.
+		 *
+		 * We can use the non-atomic __set_bit operation for setting
+		 * the flag as we are still initializing the pages.
+		 */
+		__SetPageReserved(page);
+		/*
+		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
+		 * pointer and hmm_data.  It is a bug if a ZONE_DEVICE
+		 * page is ever freed or placed on a driver-private list.
+		 */
+		page->pgmap = pgmap;
+		if (!pgmap)
+			INIT_LIST_HEAD(&page->lru);
+	}
+
+	/*
+	 * Mark the block movable so that blocks are reserved for
+	 * movable at startup. This will force kernel allocations
+	 * to reserve their blocks rather than leaking throughout
+	 * the address space during boot when many long-lived
+	 * kernel allocations are made.
+	 *
+	 * bitmap is created for zone's valid pfn range. but memmap
+	 * can be created for invalid pages (for alignment)
+	 * check here not to call set_pageblock_migratetype() against
+	 * pfn out of zone.
+	 *
+	 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+	 * because this is done early in sparse_add_one_section
+	 */
+	if (!(start_pfn & nr_pgmask))
+		set_pageblock_migratetype(start_page, MIGRATE_MOVABLE);
+}
+
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static void __meminit init_reserved_page(unsigned long pfn)
 {
@@ -5697,6 +5771,25 @@ overlap_memmap_init(unsigned long zone,
 	return false;
 }
 
+static void __meminit __memmap_init_hotplug(unsigned long size, int nid,
+					    unsigned long zone,
+					    unsigned long start_pfn,
+					    struct dev_pagemap *pgmap)
+{
+	unsigned long pfn = start_pfn + size;
+
+	while (pfn != start_pfn) {
+		unsigned long stride = pfn;
+
+		pfn = max(ALIGN_DOWN(pfn - 1, pageblock_nr_pages), start_pfn);
+		stride -= pfn;
+
+		__init_pageblock(pfn, stride, zone, nid, pgmap);
+
+		cond_resched();
+	}
+}
+
 /*
  * Initially all pages are reserved - free ones are freed
  * up by memblock_free_all() once the early boot process is
@@ -5707,49 +5800,59 @@ void __meminit memmap_init_zone(unsigned
 		struct vmem_altmap *altmap)
 {
 	unsigned long pfn, end_pfn = start_pfn + size;
-	struct page *page;
 
 	if (highest_memmap_pfn < end_pfn - 1)
 		highest_memmap_pfn = end_pfn - 1;
 
+	if (context == MEMMAP_HOTPLUG) {
 #ifdef CONFIG_ZONE_DEVICE
-	/*
-	 * Honor reservation requested by the driver for this ZONE_DEVICE
-	 * memory. We limit the total number of pages to initialize to just
-	 * those that might contain the memory mapping. We will defer the
-	 * ZONE_DEVICE page initialization until after we have released
-	 * the hotplug lock.
-	 */
-	if (zone == ZONE_DEVICE) {
-		if (!altmap)
-			return;
+		/*
+		 * Honor reservation requested by the driver for this
+		 * ZONE_DEVICE memory. We limit the total number of pages to
+		 * initialize to just those that might contain the memory
+		 * mapping. We will defer the ZONE_DEVICE page initialization
+		 * until after we have released the hotplug lock.
+		 */
+		if (zone == ZONE_DEVICE) {
+			if (!altmap)
+				return;
+
+			if (start_pfn == altmap->base_pfn)
+				start_pfn += altmap->reserve;
+			end_pfn = altmap->base_pfn +
+				  vmem_altmap_offset(altmap);
+		}
+#endif
+		/*
+		 * For these ZONE_DEVICE pages we don't need to record the
+		 * pgmap as they should represent only those pages used to
+		 * store the memory map. The actual ZONE_DEVICE pages will
+		 * be initialized later.
+		 */
+		__memmap_init_hotplug(end_pfn - start_pfn, nid, zone,
+				      start_pfn, NULL);
 
-		if (start_pfn == altmap->base_pfn)
-			start_pfn += altmap->reserve;
-		end_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+		return;
 	}
-#endif
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+		struct page *page;
+
 		/*
 		 * There can be holes in boot-time mem_map[]s handed to this
 		 * function.  They do not exist on hotplugged memory.
 		 */
-		if (context == MEMMAP_EARLY) {
-			if (!early_pfn_valid(pfn))
-				continue;
-			if (!early_pfn_in_nid(pfn, nid))
-				continue;
-			if (overlap_memmap_init(zone, &pfn))
-				continue;
-			if (defer_init(nid, pfn, end_pfn))
-				break;
-		}
+		if (!early_pfn_valid(pfn))
+			continue;
+		if (!early_pfn_in_nid(pfn, nid))
+			continue;
+		if (overlap_memmap_init(zone, &pfn))
+			continue;
+		if (defer_init(nid, pfn, end_pfn))
+			break;
 
 		page = pfn_to_page(pfn);
 		__init_single_page(page, pfn, zone, nid);
-		if (context == MEMMAP_HOTPLUG)
-			__SetPageReserved(page);
 
 		/*
 		 * Mark the block movable so that blocks are reserved for
@@ -5776,7 +5879,6 @@ void __ref memmap_init_zone_device(struc
 					   unsigned long size,
 					   struct dev_pagemap *pgmap)
 {
-	unsigned long pfn, end_pfn = start_pfn + size;
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long zone_idx = zone_idx(zone);
 	unsigned long start = jiffies;
@@ -5792,53 +5894,13 @@ void __ref memmap_init_zone_device(struc
 	 */
 	if (pgmap->altmap_valid) {
 		struct vmem_altmap *altmap = &pgmap->altmap;
+		unsigned long end_pfn = start_pfn + size;
 
 		start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
 		size = end_pfn - start_pfn;
 	}
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
-		struct page *page = pfn_to_page(pfn);
-
-		__init_single_page(page, pfn, zone_idx, nid);
-
-		/*
-		 * Mark page reserved as it will need to wait for onlining
-		 * phase for it to be fully associated with a zone.
-		 *
-		 * We can use the non-atomic __set_bit operation for setting
-		 * the flag as we are still initializing the pages.
-		 */
-		__SetPageReserved(page);
-
-		/*
-		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
-		 * pointer and hmm_data.  It is a bug if a ZONE_DEVICE
-		 * page is ever freed or placed on a driver-private list.
-		 */
-		page->pgmap = pgmap;
-		page->hmm_data = 0;
-
-		/*
-		 * Mark the block movable so that blocks are reserved for
-		 * movable at startup. This will force kernel allocations
-		 * to reserve their blocks rather than leaking throughout
-		 * the address space during boot when many long-lived
-		 * kernel allocations are made.
-		 *
-		 * bitmap is created for zone's valid pfn range. but memmap
-		 * can be created for invalid pages (for alignment)
-		 * check here not to call set_pageblock_migratetype() against
-		 * pfn out of zone.
-		 *
-		 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
-		 * because this is done early in sparse_add_one_section
-		 */
-		if (!(pfn & (pageblock_nr_pages - 1))) {
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-			cond_resched();
-		}
-	}
+	__memmap_init_hotplug(size, nid, zone_idx, start_pfn, pgmap);
 
 	pr_info("%s initialised, %lu pages in %ums\n", dev_name(pgmap->dev),
 		size, jiffies_to_msecs(jiffies - start));
_

Patches currently in -mm which might be from alexander.h.duyck@xxxxxxxxxxxxxxx are

mm-use-mm_zero_struct_page-from-sparc-on-all-64b-architectures.patch
mm-drop-meminit_pfn_in_nid-as-it-is-redundant.patch
mm-implement-new-zone-specific-memblock-iterator.patch
mm-initialize-max_order_nr_pages-at-a-time-instead-of-doing-larger-sections.patch
mm-move-hot-plug-specific-memory-init-into-separate-functions-and-optimize.patch
mm-add-reserved-flag-setting-to-set_page_links.patch
mm-use-common-iterator-for-deferred_init_pages-and-deferred_free_pages.patch
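A note for reviewers (not part of the patch): the stride arithmetic in
__memmap_init_hotplug() is the one subtle piece of the change, so below is a
small user-space sketch that replays the same backwards, pageblock-at-a-time
walk over an unaligned pfn range.  The pageblock size of 512 pages and the
sample range in main() are made-up values chosen only for illustration, and
max()/ALIGN_DOWN() are open-coded as helpers; none of this is kernel code.

/*
 * User-space illustration of the pageblock walk performed by
 * __memmap_init_hotplug() in this patch.  Assumed values: 512-page
 * pageblocks and an arbitrary sample pfn range.
 */
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES	512UL	/* assumed pageblock size */

/* Round pfn down to the start of its pageblock (stand-in for ALIGN_DOWN). */
static unsigned long align_down(unsigned long pfn, unsigned long align)
{
	return pfn & ~(align - 1);
}

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

/* Stand-in for __init_pageblock(): just report what would be initialized. */
static void init_pageblock(unsigned long start_pfn, unsigned long nr_pages)
{
	printf("init pfns [%lu, %lu) (%lu pages)\n",
	       start_pfn, start_pfn + nr_pages, nr_pages);
}

/* Mirrors the while () loop added by the patch. */
static void memmap_init_hotplug(unsigned long size, unsigned long start_pfn)
{
	unsigned long pfn = start_pfn + size;

	while (pfn != start_pfn) {
		unsigned long stride = pfn;

		pfn = max_ul(align_down(pfn - 1, PAGEBLOCK_NR_PAGES),
			     start_pfn);
		stride -= pfn;

		init_pageblock(pfn, stride);
		/* the kernel version calls cond_resched() here */
	}
}

int main(void)
{
	/* A range whose start and end both sit off a pageblock boundary. */
	memmap_init_hotplug(1300, 100);
	return 0;
}

With the sample range above the walk emits [1024, 1400), [512, 1024) and
[100, 512): at most one partial chunk at each end and never a chunk that
crosses a pageblock boundary, which is exactly the precondition that the
VM_BUG_ON() in __init_pageblock() enforces.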