The quilt patch titled
     Subject: mm: fix is_pinnable_page against on cma page
has been removed from the -mm tree.  Its filename was
     mm-fix-is_pinnable_page-against-on-cma-page.patch

This patch was dropped because an alternative patch was merged

------------------------------------------------------
From: Minchan Kim <minchan@xxxxxxxxxx>
Subject: mm: fix is_pinnable_page against on cma page

Pages in the CMA area can have MIGRATE_ISOLATE as well as MIGRATE_CMA, so
the current is_pinnable_page() can miss CMA pages which have
MIGRATE_ISOLATE.  Such pages end up pinned as longterm through the
pin_user_pages() APIs, so CMA allocation keeps failing until the pin is
released.

     CPU 0                                   CPU 1 - Task B

cma_alloc
alloc_contig_range
                                        pin_user_pages_fast(FOLL_LONGTERM)
change pageblock as MIGRATE_ISOLATE
                                        internal_get_user_pages_fast
                                        lockless_pages_from_mm
                                        gup_pte_range
                                        try_grab_folio
                                        is_pinnable_page
                                          return true;
                                        So, pinned the page successfully.
page migration failure with pinned page
                                        ..
                                        .. After 30 sec
                                        unpin_user_page(page)

CMA allocation succeeded after 30 sec.

The CMA allocation path protects against the migration type change race
using zone->lock, but all the GUP path needs to know is whether the page
is in the CMA area, not its exact migration type.  Thus, we don't need
zone->lock; it is enough to check whether the migration type is either
MIGRATE_ISOLATE or MIGRATE_CMA.

Adding the MIGRATE_ISOLATE check in is_pinnable_page() could reject
pinning pages on MIGRATE_ISOLATE pageblocks even when the page is neither
in the CMA area nor in a movable zone, if the page is temporarily
unmovable.  However, such a migration failure caused by an unexpectedly
held temporary refcount is a general issue that does not only come from
MIGRATE_ISOLATE, and MIGRATE_ISOLATE is itself a transient state, just
like other temporarily elevated refcount problems.

[akpm@xxxxxxxxxxxxxxxxxxxx: MIGRATE_CMA and MIGRATE_ISOLATE are not bitwise, per Minchan]
[minchan@xxxxxxxxxx: restore __mt definition]
[minchan@xxxxxxxxxx: changes per Paul and John]
Link: https://lkml.kernel.org/r/20220512204143.3961150-1-minchan@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20220510211743.95831-1-minchan@xxxxxxxxxx
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: John Dias <joaodias@xxxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |   17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

--- a/include/linux/mm.h~mm-fix-is_pinnable_page-against-on-cma-page
+++ a/include/linux/mm.h
@@ -1625,8 +1625,21 @@ static inline bool page_needs_cow_for_dm
 #ifdef CONFIG_MIGRATION
 static inline bool is_pinnable_page(struct page *page)
 {
-	return !(is_zone_movable_page(page) || is_migrate_cma_page(page)) ||
-		is_zero_pfn(page_to_pfn(page));
+#ifdef CONFIG_CMA
+	/*
+	 * Defend against future compiler LTO features, or code refactoring
+	 * that inlines the above function, by forcing a single read. Because,
+	 * this routine races with set_pageblock_migratetype(), and we want to
+	 * avoid reading zero, when actually one or the other flags was set.
+	 */
+	int __mt = get_pageblock_migratetype(page);
+	int mt = __READ_ONCE(__mt);
+
+	if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
+		return false;
+#endif
+
+	return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));
 }
 #else
 static inline bool is_pinnable_page(struct page *page)
_
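
As an aside for readers following the logic rather than the kernel
internals, the decision the hunk above adds can be modelled in a few lines
of plain C.  The sketch below is not part of the patch; the enum values
and the page_model structure are stand-ins for the real pageblock
machinery, and it only mirrors the quoted hunk: a page whose pageblock is
MIGRATE_CMA or MIGRATE_ISOLATE is never pinnable for longterm pins, and
everything else falls back to the movable-zone / zero-pfn test.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel's migratetype values (illustrative only). */
enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_CMA,
	MIGRATE_ISOLATE,
};

/* Toy "page": just the facts the quoted hunk looks at. */
struct page_model {
	enum migratetype pageblock_mt;	/* get_pageblock_migratetype() */
	bool in_movable_zone;		/* is_zone_movable_page()       */
	bool zero_pfn;			/* is_zero_pfn(page_to_pfn())   */
};

/* Mirrors the fixed is_pinnable_page(): CMA and isolated pageblocks lose. */
static bool is_pinnable_model(const struct page_model *page)
{
	enum migratetype mt = page->pageblock_mt;	/* read once, as in the patch */

	if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
		return false;

	return !(page->in_movable_zone || page->zero_pfn);
}

int main(void)
{
	struct page_model cma      = { MIGRATE_CMA,       false, false };
	struct page_model isolated = { MIGRATE_ISOLATE,   false, false };
	struct page_model normal   = { MIGRATE_UNMOVABLE, false, false };

	/* A CMA page stays unpinnable even while isolated for cma_alloc(). */
	printf("cma %d isolated %d normal %d\n",
	       is_pinnable_model(&cma),
	       is_pinnable_model(&isolated),
	       is_pinnable_model(&normal));	/* prints: cma 0 isolated 0 normal 1 */
	return 0;
}

The reason both values are listed is exactly the race in the diagram
above: cma_alloc() flips the pageblock from MIGRATE_CMA to MIGRATE_ISOLATE
before migrating pages away, so a check for MIGRATE_CMA alone would let
GUP pin the page during that window.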
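
For context on the CPU 1 column of the diagram, the snippet below is a
rough, hypothetical sketch of the kind of caller whose longterm pin was
stalling cma_alloc(); the function name and error handling are invented
for illustration, while pin_user_pages_fast(), FOLL_LONGTERM and
unpin_user_pages() are the real GUP APIs.  With the check above in place,
FOLL_LONGTERM GUP treats pages on CMA (or isolated) pageblocks as
unpinnable and migrates them off the CMA area before pinning, so the pin
no longer blocks a later cma_alloc().

#include <linux/errno.h>
#include <linux/mm.h>

/*
 * Hypothetical driver helper: take a longterm pin on nr_pages of user
 * memory starting at uaddr.  FOLL_LONGTERM tells GUP that the pin may be
 * held indefinitely (e.g. for long-lived DMA), which is why such pages
 * must not remain on CMA pageblocks.
 */
static int example_longterm_pin(unsigned long uaddr, int nr_pages,
				struct page **pages)
{
	int pinned;

	pinned = pin_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;
	if (pinned != nr_pages) {
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}

	/* ... use the pages for long-lived DMA, then much later: ... */

	unpin_user_pages(pages, nr_pages);
	return 0;
}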

Patches currently in -mm which might be from minchan@xxxxxxxxxx are

mm-dont-be-stuck-to-rmap-lock-on-reclaim-path.patch