On Fri 29-06-18 10:29:17, Jia He wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") tried to optimize the loop in memmap_init_zone(). But
> there is still some room for improvement.

It would be great to briefly describe those optimizations from a
high-level POV.

> Patch 1 introduces a new config to make the code more generic
> Patch 2 retains memblock_next_valid_pfn on arm and arm64
> Patch 3 optimizes memblock_next_valid_pfn()
> Patches 4~6 optimize early_pfn_valid()
>
> As for the performance improvement, after this set, I can see the time
> overhead of memmap_init() is reduced from 27956us to 13537us on my
> armv8a server (QDF2400 with 96G memory, page size 64k).

So this is a ~14ms saving when booting a 96G machine. Is this really
worth the additional code? Are there any other benefits?

[...]

>  arch/arm/Kconfig          |  4 +++
>  arch/arm/mm/init.c        |  1 +
>  arch/arm64/Kconfig        |  4 +++
>  arch/arm64/mm/init.c      |  1 +
>  include/linux/early_pfn.h | 79 +++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/memblock.h  |  2 ++
>  include/linux/mmzone.h    | 18 ++++++++++-
>  mm/Kconfig                |  3 ++
>  mm/memblock.c             |  9 ++++++
>  mm/page_alloc.c           |  5 ++-
>  10 files changed, 124 insertions(+), 2 deletions(-)
>  create mode 100644 include/linux/early_pfn.h

-- 
Michal Hocko
SUSE Labs