In commit 52fb87c81f11 ("mm/memory_hotplug: cleanup __remove_pages()"), we
cleaned up __remove_pages(), and introduced a shorter variant to calculate
the number of pages to the next section boundary. Turns out we can make
this calculation easier to read. We always want to have the number of
pages (> 0) to the next section boundary, starting from the current pfn.

We'll clean up __remove_pages() in a follow-up patch and directly make use
of this computation.

Suggested-by: Segher Boessenkool <segher@xxxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Baoquan He <bhe@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
 mm/memory_hotplug.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 4a9b3f6c6b37..8fe7e32dad48 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -534,7 +534,8 @@ void __remove_pages(unsigned long pfn, unsigned long nr_pages,
 	for (; pfn < end_pfn; pfn += cur_nr_pages) {
 		cond_resched();
 		/* Select all remaining pages up to the next section boundary */
-		cur_nr_pages = min(end_pfn - pfn, -(pfn | PAGE_SECTION_MASK));
+		cur_nr_pages = min(end_pfn - pfn,
+				   SECTION_ALIGN_UP(pfn + 1) - pfn);
 		__remove_section(pfn, cur_nr_pages, map_offset, altmap);
 		map_offset = 0;
 	}
-- 
2.24.1
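
For reference, a minimal userspace sketch (not kernel code) of why the two
expressions are equivalent: PAGES_PER_SECTION below is an arbitrary
power-of-two example value, and the two macros only mirror the shape of the
kernel's definitions. Both the old expression, -(pfn | PAGE_SECTION_MASK),
and the new one, SECTION_ALIGN_UP(pfn + 1) - pfn, yield the number of pages
(> 0) from pfn up to the next section boundary:

/*
 * Standalone userspace sketch, not kernel code: PAGES_PER_SECTION is an
 * arbitrary example value, and the macros below merely mirror the shape
 * of the kernel's definitions.
 */
#include <assert.h>
#include <stdio.h>

#define PAGES_PER_SECTION	(1UL << 15)
#define PAGE_SECTION_MASK	(~(PAGES_PER_SECTION - 1))
#define SECTION_ALIGN_UP(pfn)	(((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)

int main(void)
{
	unsigned long pfns[] = { 0, 1, PAGES_PER_SECTION - 1, PAGES_PER_SECTION,
				 3 * PAGES_PER_SECTION + 42 };

	for (unsigned int i = 0; i < sizeof(pfns) / sizeof(pfns[0]); i++) {
		unsigned long pfn = pfns[i];
		/* old expression: relies on unsigned negation wrapping around */
		unsigned long old_nr = -(pfn | PAGE_SECTION_MASK);
		/* new expression: round up to the next boundary, then subtract pfn */
		unsigned long new_nr = SECTION_ALIGN_UP(pfn + 1) - pfn;

		assert(old_nr == new_nr);
		assert(new_nr > 0 && new_nr <= PAGES_PER_SECTION);
		printf("pfn %lu -> %lu pages to next section boundary\n",
		       pfn, new_nr);
	}
	return 0;
}

The "pfn + 1" is what keeps the result strictly positive: a pfn sitting
exactly on a section boundary yields a full PAGES_PER_SECTION instead of 0.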