Here is a new version of "[PATCH v11 0/3] remain and optimize
memblock_next_valid_pfn on arm and arm64" from Jia He, respun as Ard
suggested [1].

In this version I squashed patch 1/3 and patch 2/3 of v11 into one
patch, fixed a bug where the regions array could be accessed out of
bounds, and introduce memblock_next_valid_pfn() for arm64 only, as I
don't have an arm32 platform to test on.

Ard asked for the series to come "with the new data points added for
documentation, and crystal clear about how the meaning of PFN validity
differs between ARM and other architectures, and why the assumptions
that the optimization is based on are guaranteed to hold". To be honest,
I don't see how PFN validity differs between the ARM and x86
architectures, but commit b92df1de5d28 ("mm: page_alloc: skip over
regions of invalid pfns where possible") has a bug that can access the
regions array out of bounds, so I am not sure whether that is the root
cause.

Testing on a HiSilicon ARM64 server (a 4-socket system), I get a large
speedup for bootmem_init() at boot:

  with 384G memory:
  before: 13310ms
  after:   1415ms

  with 1T memory:
  before: 20s
  after:   2s

[1]: https://lkml.org/lkml/2019/6/10/412

Jia He (2):
  mm: page_alloc: introduce memblock_next_valid_pfn() (again) for arm64
  mm: page_alloc: reduce unnecessary binary search in
    memblock_next_valid_pfn

 arch/arm64/Kconfig     |  1 +
 include/linux/mmzone.h |  9 +++++++
 mm/Kconfig             |  3 +++
 mm/memblock.c          | 56 ++++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c        |  4 ++-
 5 files changed, 72 insertions(+), 1 deletion(-)

--
2.19.1
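
P.S. As background on the optimization itself: memblock keeps a sorted,
non-overlapping array of memory regions, so the next valid pfn after a
hole can be found with a binary search over that array instead of
testing every pfn one by one. Below is a minimal userspace sketch that
models this idea; the names (struct region, next_valid_pfn) and the
sample region layout are illustrative assumptions and do not match the
kernel implementation exactly.

/*
 * Userspace model of skipping invalid pfns via binary search over a
 * sorted, non-overlapping array of memory regions. Illustrative only.
 */
#include <stdio.h>

#define PAGE_SHIFT 12UL

struct region {
	unsigned long base;	/* physical base address */
	unsigned long size;	/* region size in bytes */
};

/* Sorted by base, non-overlapping, like memblock.memory.regions */
static const struct region regions[] = {
	{ 0x00080000000UL, 0x080000000UL },
	{ 0x10000000000UL, 0x100000000UL },
};
static const unsigned int nr_regions = sizeof(regions) / sizeof(regions[0]);

/*
 * Return pfn + 1 if it lies inside a region, the first pfn of the next
 * region if it falls into a hole, or ~0UL if it is past the last region.
 */
static unsigned long next_valid_pfn(unsigned long pfn)
{
	unsigned long addr = (pfn + 1) << PAGE_SHIFT;
	unsigned int left = 0, right = nr_regions, mid;

	do {
		mid = (left + right) / 2;
		if (addr < regions[mid].base)
			right = mid;
		else if (addr >= regions[mid].base + regions[mid].size)
			left = mid + 1;
		else
			return pfn + 1;	/* addr is inside regions[mid] */
	} while (left < right);

	if (right == nr_regions)	/* past the last region */
		return ~0UL;
	return regions[right].base >> PAGE_SHIFT; /* jump over the hole */
}

int main(void)
{
	/*
	 * A pfn in the hole between the two regions jumps straight to
	 * the start of the second region instead of being walked one
	 * pfn at a time.
	 */
	unsigned long hole_pfn = 0x100000000UL >> PAGE_SHIFT;

	printf("next valid pfn after %#lx is %#lx\n",
	       hole_pfn, next_valid_pfn(hole_pfn));
	return 0;
}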