The patch titled
     Subject: mm: distinguish CMA and MOVABLE isolation in has_unmovable_pages()
has been added to the -mm tree.  Its filename is
     mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Michal Hocko <mhocko@xxxxxxxx>
Subject: mm: distinguish CMA and MOVABLE isolation in has_unmovable_pages()

Joonsoo has noticed that "mm: drop migrate type checks from
has_unmovable_pages" would break the CMA allocator because it relies on
has_unmovable_pages returning false even for CMA pageblocks which in fact
don't have to be movable:

alloc_contig_range
  start_isolate_page_range
    set_migratetype_isolate
      has_unmovable_pages

This is a result of the code sharing between CMA and memory hotplug while
each one has a different idea of what has_unmovable_pages should return.
This is unfortunate but fixing it properly would require a lot of code
duplication.

Fix the issue by introducing the requested migrate type argument and
special-casing MIGRATE_CMA, where CMA pageblocks are handled properly.
This will work for memory hotplug because it requires MIGRATE_MOVABLE.

Link: http://lkml.kernel.org/r/20171019122118.y6cndierwl2vnguj@xxxxxxxxxxxxxx
Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
Reported-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Igor Mammedov <imammedo@xxxxxxxxxx>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Cc: Reza Arbab <arbab@xxxxxxxxxxxxxxxxxx>
Cc: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
Cc: Xishi Qiu <qiuxishi@xxxxxxxxxx>
Cc: Yasuaki Ishimatsu <yasu.isimatu@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/page-isolation.h |    2 +-
 mm/page_alloc.c                |   12 +++++++++++-
 mm/page_isolation.c            |   10 +++++-----
 3 files changed, 17 insertions(+), 7 deletions(-)

diff -puN include/linux/page-isolation.h~mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages include/linux/page-isolation.h
--- a/include/linux/page-isolation.h~mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages
+++ a/include/linux/page-isolation.h
@@ -31,7 +31,7 @@ static inline bool is_migrate_isolate(in
 #endif
 
 bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
-                         bool skip_hwpoisoned_pages);
+                         int migratetype, bool skip_hwpoisoned_pages);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
                                 int migratetype, int *num_movable);
diff -puN mm/page_alloc.c~mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages mm/page_alloc.c
--- a/mm/page_alloc.c~mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages
+++ a/mm/page_alloc.c
@@ -7349,6 +7349,7 @@ void *__init alloc_large_system_hash(con
  * race condition. So you can't expect this function should be exact.
  */
 bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+                         int migratetype,
                          bool skip_hwpoisoned_pages)
 {
         unsigned long pfn, iter, found;
@@ -7360,6 +7361,15 @@ bool has_unmovable_pages(struct zone *zo
         if (zone_idx(zone) == ZONE_MOVABLE)
                 return false;
 
+        /*
+         * CMA allocations (alloc_contig_range) really need to mark isolate
+         * CMA pageblocks even when they are not movable in fact so consider
+         * them movable here.
+         */
+        if (is_migrate_cma(migratetype) &&
+                        is_migrate_cma(get_pageblock_migratetype(page)))
+                return false;
+
         pfn = page_to_pfn(page);
         for (found = 0, iter = 0; iter < pageblock_nr_pages; iter++) {
                 unsigned long check = pfn + iter;
@@ -7442,7 +7452,7 @@ bool is_pageblock_removable_nolock(struc
         if (!zone_spans_pfn(zone, pfn))
                 return false;
 
-        return !has_unmovable_pages(zone, page, 0, true);
+        return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE, true);
 }
 
 #if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
diff -puN mm/page_isolation.c~mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages mm/page_isolation.c
--- a/mm/page_isolation.c~mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages
+++ a/mm/page_isolation.c
@@ -15,7 +15,7 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_isolation.h>
 
-static int set_migratetype_isolate(struct page *page,
+static int set_migratetype_isolate(struct page *page, int migratetype,
                                 bool skip_hwpoisoned_pages)
 {
         struct zone *zone;
@@ -52,7 +52,7 @@ static int set_migratetype_isolate(struc
          * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
          * We just check MOVABLE pages.
          */
-        if (!has_unmovable_pages(zone, page, arg.pages_found,
+        if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype,
                                  skip_hwpoisoned_pages))
                 ret = 0;
 
@@ -64,14 +64,14 @@ static int set_migratetype_isolate(struc
 out:
         if (!ret) {
                 unsigned long nr_pages;
-                int migratetype = get_pageblock_migratetype(page);
+                int mt = get_pageblock_migratetype(page);
 
                 set_pageblock_migratetype(page, MIGRATE_ISOLATE);
                 zone->nr_isolate_pageblock++;
                 nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
                                                                 NULL);
 
-                __mod_zone_freepage_state(zone, -nr_pages, migratetype);
+                __mod_zone_freepage_state(zone, -nr_pages, mt);
         }
 
         spin_unlock_irqrestore(&zone->lock, flags);
@@ -183,7 +183,7 @@ int start_isolate_page_range(unsigned lo
                         pfn += pageblock_nr_pages) {
                 page = __first_valid_page(pfn, pageblock_nr_pages);
                 if (page &&
-                    set_migratetype_isolate(page, skip_hwpoisoned_pages)) {
+                    set_migratetype_isolate(page, migratetype, skip_hwpoisoned_pages)) {
                         undo_pfn = pfn;
                         goto undo;
                 }
_

Patches currently in -mm which might be from mhocko@xxxxxxxx are

mm-memory_hotplug-do-not-back-off-draining-pcp-free-pages-from-kworker-context.patch
mm-drop-migrate-type-checks-from-has_unmovable_pages.patch
mm-distinguish-cma-and-movable-isolation-in-has_unmovable_pages.patch
mm-page_alloc-fail-has_unmovable_pages-when-seeing-reserved-pages.patch
mm-memory_hotplug-do-not-fail-offlining-too-early.patch
mm-memory_hotplug-remove-timeout-from-__offline_memory.patch
mm-hugetlb-drop-hugepages_treat_as_movable-sysctl.patch
mm-arch-remove-empty_bad_page.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html