The patch titled
     Subject: mm/memory_hotplug: inline __offline_pages() into offline_pages()
has been removed from the -mm tree.  Its filename was
     mm-memory_hotplug-inline-__offline_pages-into-offline_pages.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/memory_hotplug: inline __offline_pages() into offline_pages()

Patch series "mm/memory_hotplug: online_pages()/offline_pages() cleanups", v2.

These are a bunch of cleanups for online_pages()/offline_pages() and
related code, mostly getting rid of memory hole handling that is no longer
necessary.  There is only a single walk_system_ram_range() call left in
offline_pages(), to make sure we don't have any memory holes.  I had some
of these patches lying around for a longer time but didn't have time to
polish them.

In addition, the last patch marks all pageblocks of memory to get onlined
MIGRATE_ISOLATE, so pages that have just been exposed to the buddy cannot
get allocated before onlining is complete.  Once the heavy lifting is
done, the pageblocks are set to MIGRATE_MOVABLE, such that allocations
are possible.

I played with DIMMs and virtio-mem on x86-64 and didn't spot any
surprises.  I verified that the number of isolated pageblocks is correctly
handled when onlining/offlining.


This patch (of 10):

There is only a single user, offline_pages().  Let's inline it, to make it
look more similar to online_pages().

Link: https://lkml.kernel.org/r/20200819175957.28465-1-david@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20200819175957.28465-2-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@xxxxxxxxx>
Cc: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
Cc: Baoquan He <bhe@xxxxxxxxxx>
Cc: Pankaj Gupta <pankaj.gupta.linux@xxxxxxxxx>
Cc: Charan Teja Reddy <charante@xxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Fenghua Yu <fenghua.yu@xxxxxxxxx>
Cc: Logan Gunthorpe <logang@xxxxxxxxxxxx>
Cc: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Michel Lespinasse <walken@xxxxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
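
Below is a minimal userspace sketch (not kernel code) of the calling
convention this patch leaves behind: offline_pages() takes
(start_pfn, nr_pages), derives end_pfn locally, and treats any page in the
range that is not System RAM as a memory hole.  count_ram_pages() is a
purely hypothetical stand-in for walk_system_ram_range() with
count_system_ram_pages_cb(); the real function of course does far more
(isolation, migration, and undoing everything on failure).

#include <errno.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for the walk_system_ram_range() +
 * count_system_ram_pages_cb() pair: report how many pages in the
 * range are backed by System RAM (here: all of them).
 */
static unsigned long count_ram_pages(unsigned long start_pfn,
				     unsigned long nr_pages)
{
	(void)start_pfn;
	return nr_pages;
}

/* Same calling convention as the inlined offline_pages() below. */
static int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
{
	const unsigned long end_pfn = start_pfn + nr_pages;
	unsigned long system_ram_pages;

	/* The only remaining hole check: every PFN must be System RAM. */
	system_ram_pages = count_ram_pages(start_pfn, nr_pages);
	if (system_ram_pages != nr_pages)
		return -EINVAL;

	printf("offlining pfn range [0x%lx, 0x%lx)\n", start_pfn, end_pfn);
	return 0;
}

int main(void)
{
	/* e.g. a 128 MiB block of 4 KiB pages starting at pfn 0x100000 */
	return offline_pages(0x100000, 32768);
}
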
 mm/memory_hotplug.c |   16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

--- a/mm/memory_hotplug.c~mm-memory_hotplug-inline-__offline_pages-into-offline_pages
+++ a/mm/memory_hotplug.c
@@ -1484,11 +1484,10 @@ static int count_system_ram_pages_cb(uns
 	return 0;
 }
 
-static int __ref __offline_pages(unsigned long start_pfn,
-		unsigned long end_pfn)
+int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
-	unsigned long pfn, nr_pages = 0;
-	unsigned long offlined_pages = 0;
+	const unsigned long end_pfn = start_pfn + nr_pages;
+	unsigned long pfn, system_ram_pages = 0, offlined_pages = 0;
 	int ret, node, nr_isolate_pageblock;
 	unsigned long flags;
 	struct zone *zone;
@@ -1505,9 +1504,9 @@ static int __ref __offline_pages(unsigne
 	 * memory holes PG_reserved, don't need pfn_valid() checks, and can
 	 * avoid using walk_system_ram_range() later.
 	 */
-	walk_system_ram_range(start_pfn, end_pfn - start_pfn, &nr_pages,
+	walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
 			      count_system_ram_pages_cb);
-	if (nr_pages != end_pfn - start_pfn) {
+	if (system_ram_pages != nr_pages) {
 		ret = -EINVAL;
 		reason = "memory holes";
 		goto failed_removal;
@@ -1656,11 +1655,6 @@ failed_removal:
 	return ret;
 }
 
-int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
-{
-	return __offline_pages(start_pfn, start_pfn + nr_pages);
-}
-
 static int check_memblock_offlined_cb(struct memory_block *mem, void *arg)
 {
 	int ret = !is_memblock_offlined(mem);
_

Patches currently in -mm which might be from david@xxxxxxxxxx are