On Tue, Jul 24, 2018 at 9:18 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, 24 Jul 2018 19:55:20 -0400 Pavel Tatashin <pasha.tatashin@xxxxxxxxxx> wrote:
>
> > memmap_init_zone is getting complex, because it is called from different
> > contexts: hotplug, and during boot, and also because it must handle some
> > architecture quirks. One of them is mirrored memory.
> >
> > Move the code that decides whether to skip mirrored memory outside of
> > memmap_init_zone, into a separate function.
>
> Conflicts a bit with the page_alloc.c hunk from
> http://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-remain-memblock_next_valid_pfn-on-arm-arm64.patch.
> Please check my fixup:

The merge looks good to me. Thank you.

> void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>		unsigned long start_pfn, enum memmap_context context,
>		struct vmem_altmap *altmap)
> {
>	unsigned long pfn, end_pfn = start_pfn + size;
>	struct page *page;
>
>	if (highest_memmap_pfn < end_pfn - 1)
>		highest_memmap_pfn = end_pfn - 1;
>
>	/*
>	 * Honor reservation requested by the driver for this ZONE_DEVICE
>	 * memory
>	 */
>	if (altmap && start_pfn == altmap->base_pfn)
>		start_pfn += altmap->reserve;
>
>	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>		/*
>		 * There can be holes in boot-time mem_map[]s handed to this
>		 * function. They do not exist on hotplugged memory.
>		 */
>		if (context == MEMMAP_EARLY) {
>			if (!early_pfn_valid(pfn)) {
>				pfn = next_valid_pfn(pfn) - 1;

I wish we did not have to do next_valid_pfn(pfn) - 1, and instead could
do something like:

for (pfn = start_pfn; pfn < end_pfn; pfn = next_valid_pfn(pfn))

Of course, the performance of next_valid_pfn() should be optimized on arm
for the common case where the next valid pfn is simply pfn + 1.

Pavel
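
[Editor's note: for illustration only, here is a minimal, self-contained
userspace sketch of the loop shape Pavel suggests, where the increment
expression skips over holes instead of compensating with "- 1" inside the
body. The bitmap, pfn_valid_stub() and next_valid_pfn_stub() helpers are
made-up stand-ins, not the kernel's early_pfn_valid()/next_valid_pfn().]

#include <stdbool.h>
#include <stdio.h>

#define MAX_PFN 32UL

/* Stand-in validity map: bit i set means pfn i is "valid". */
static const unsigned long valid_mask = 0x00ff00ffUL;

static bool pfn_valid_stub(unsigned long pfn)
{
	return pfn < MAX_PFN && (valid_mask & (1UL << pfn));
}

/*
 * Stand-in for next_valid_pfn(): first valid pfn strictly greater than
 * @pfn, or MAX_PFN if none remains. The common case (pfn + 1 already
 * valid) stays a single iteration, which is the case the mail says
 * should be fast on arm.
 */
static unsigned long next_valid_pfn_stub(unsigned long pfn)
{
	do {
		pfn++;
	} while (pfn < MAX_PFN && !pfn_valid_stub(pfn));
	return pfn;
}

int main(void)
{
	unsigned long start_pfn = 0, end_pfn = MAX_PFN, pfn;

	/* The suggested loop shape: the increment expression skips holes. */
	for (pfn = start_pfn; pfn < end_pfn; pfn = next_valid_pfn_stub(pfn)) {
		/* start_pfn itself may sit in a hole; skip forward if so. */
		if (!pfn_valid_stub(pfn))
			continue;
		printf("init pfn %lu\n", pfn);
	}
	return 0;
}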