The patch below does not apply to the 4.19-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@xxxxxxxxxxxxxxx>.

To reproduce the conflict and resubmit, you may use the following commands:

git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-4.19.y
git checkout FETCH_HEAD
git cherry-pick -x 1640a0ef80f6d572725f5b0330038c18e98ea168
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@xxxxxxxxxxxxxxx>' --in-reply-to '2023112353-revival-badness-110c@gregkh' --subject-prefix 'PATCH 4.19.y' HEAD^..

Possible dependencies:

1640a0ef80f6 ("mm/memory_hotplug: use pfn math in place of direct struct page manipulation")
aa218795cb5f ("mm: Allow to offline unmovable PageOffline() pages via MEM_GOING_OFFLINE")
fe4c86c916d9 ("mm: remove "count" parameter from has_unmovable_pages()")
3f9903b9ca5e ("mm: remove the memory isolate notifier")
756d25be457f ("mm/page_isolation.c: convert SKIP_HWPOISON to MEMORY_OFFLINE")
d8c6546b1aea ("mm: introduce compound_nr()")
a50b854e073c ("mm: introduce page_size()")
dd625285910d ("drivers/base/memory.c: get rid of find_memory_block_hinted()")
ea8846411ad6 ("mm/memory_hotplug: move and simplify walk_memory_blocks()")
fbcf73ce6582 ("mm/memory_hotplug: rename walk_memory_range() and pass start+size instead of pfns")
90ec010fe0d6 ("drivers/base/memory: use "unsigned long" for block ids")
2491f0a2c0b1 ("mm: section numbers use the type "unsigned long"")
4c4b7f9ba948 ("mm/memory_hotplug: remove memory block devices before arch_remove_memory()")
db051a0dac13 ("mm/memory_hotplug: create memory block devices after arch_add_memory()")
80ec922dbd87 ("mm/memory_hotplug: allow arch_remove_memory() without CONFIG_MEMORY_HOTREMOVE")
1811582587c4 ("drivers/base/memory: pass a block_id to init_memory_block()")
22eb634632a2 ("arm64/mm: add temporary arch_remove_memory() implementation")
eca499ab3749 ("mm/hotplug: make remove_memory() interface usable")
98879b3b9edc ("mm: vmscan: correct some vmscan counters for THP swapout")
aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge page")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From 1640a0ef80f6d572725f5b0330038c18e98ea168 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@xxxxxxxxxx>
Date: Wed, 13 Sep 2023 16:12:46 -0400
Subject: [PATCH] mm/memory_hotplug: use pfn math in place of direct struct
 page manipulation

When dealing with hugetlb pages, manipulating struct page pointers
directly can get to the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP.  Use pfn
calculation to handle it properly.

Without the fix, a wrong number of pages might be skipped.  Since skip
cannot be negative, scan_movable_pages() will end early and might miss
a movable page with -ENOENT.  This might fail offline_pages().  No bug
is reported.  The fix comes from code inspection.
Link: https://lkml.kernel.org/r/20230913201248.452081-4-zi.yan@xxxxxxxx
Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
Cc: Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1b03f4ec6fd2..3b301c4023ff 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 		 */
 		if (HPageMigratable(head))
 			goto found;
-		skip = compound_nr(head) - (page - head);
+		skip = compound_nr(head) - (pfn - page_to_pfn(head));
 		pfn += skip - 1;
 	}
 	return -ENOENT;
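
As background for anyone preparing the backport: the sketch below is
not part of the patch or of the kernel; it is a minimal userspace
illustration of why "page - head" can yield a bogus skip when a
gigantic hugetlb page crosses a SPARSEMEM (non-VMEMMAP) section
boundary, while pfn math stays correct.  All names and sizes here
(toy_pfn_to_page(), toy_page_to_pfn(), PAGES_PER_SECTION, the hole
between the two mem_maps) are invented for the demo.

/*
 * Toy model: on SPARSEMEM without VMEMMAP each section has its own
 * mem_map array, so struct pages of adjacent sections need not be
 * virtually contiguous.  The "hole" member forces that discontiguity.
 */
#include <stdio.h>

struct page { unsigned long flags; };	/* stand-in for the real struct page */

#define PAGES_PER_SECTION 8UL		/* tiny sections, demo only */

static struct {
	struct page map0[PAGES_PER_SECTION];	/* mem_map of section 0 */
	char hole[1024];			/* gap between the mem_maps */
	struct page map1[PAGES_PER_SECTION];	/* mem_map of section 1 */
} toy;

static struct page *toy_pfn_to_page(unsigned long pfn)
{
	return pfn < PAGES_PER_SECTION ? &toy.map0[pfn]
				       : &toy.map1[pfn - PAGES_PER_SECTION];
}

static unsigned long toy_page_to_pfn(struct page *p)
{
	if (p >= toy.map0 && p < toy.map0 + PAGES_PER_SECTION)
		return p - toy.map0;
	return PAGES_PER_SECTION + (p - toy.map1);
}

int main(void)
{
	/*
	 * A "gigantic" compound page: head at pfn 4, 8 pages long, so it
	 * crosses the section boundary at pfn 8; the scan sits at pfn 10.
	 */
	unsigned long head_pfn = 4, compound_nr = 8, pfn = 10;
	struct page *head = toy_pfn_to_page(head_pfn);
	struct page *page = toy_pfn_to_page(pfn);

	/*
	 * Old, buggy expression: subtracting pointers that live in two
	 * different mem_map arrays (strictly undefined in C, which is
	 * exactly the point) counts the hole as pages, so the offset,
	 * and hence the skip, is garbage.
	 */
	long bad_skip = (long)compound_nr - (page - head);

	/* Patched expression: pfn arithmetic ignores the mem_map layout. */
	unsigned long good_skip = compound_nr - (pfn - toy_page_to_pfn(head));

	printf("pointer-math skip = %ld (bogus)\n", bad_skip);
	printf("pfn-math skip     = %lu (expected %lu)\n",
	       good_skip, compound_nr - (pfn - head_pfn));
	return 0;
}

With the made-up layout above the pointer subtraction swallows the
inter-section hole and goes negative; in the kernel, skip is an
unsigned long, so as the commit message notes the loop ends early with
-ENOENT instead, and offline_pages() can fail.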