The quilt patch titled
     Subject: mm/migrate: remove slab checks in isolate_movable_page()
has been removed from the -mm tree.  Its filename was
     mm-migrate-remove-slab-checks-in-isolate_movable_page.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Subject: mm/migrate: remove slab checks in isolate_movable_page()
Date: Tue, 10 Dec 2024 21:48:07 +0900

Commit 8b8817630ae8 ("mm/migrate: make isolate_movable_page() skip slab
pages") introduced slab checks to prevent mis-identification of slab
pages as movable kernel pages.

However, after Matthew's frozen folio series, these slab checks became
unnecessary as the migration logic fails to increase the reference count
for frozen slab folios.

Remove these redundant slab checks and associated memory barriers.

Link: https://lkml.kernel.org/r/20241210124807.8584-1-42.hyeyoo@xxxxxxxxx
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/migrate.c |    8 --------
 mm/slub.c    |    4 ----
 2 files changed, 12 deletions(-)

--- a/mm/migrate.c~mm-migrate-remove-slab-checks-in-isolate_movable_page
+++ a/mm/migrate.c
@@ -68,10 +68,6 @@ bool isolate_movable_page(struct page *p
 	if (!folio)
 		goto out;
 
-	if (unlikely(folio_test_slab(folio)))
-		goto out_putfolio;
-	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
-	smp_rmb();
 	/*
 	 * Check movable flag before taking the page lock because
 	 * we use non-atomic bitops on newly allocated page flags so
@@ -79,10 +75,6 @@ bool isolate_movable_page(struct page *p
 	 */
 	if (unlikely(!__folio_test_movable(folio)))
 		goto out_putfolio;
-	/* Pairs with smp_wmb() in slab allocation, e.g. SLUB's alloc_slab_page() */
-	smp_rmb();
-	if (unlikely(folio_test_slab(folio)))
-		goto out_putfolio;
 
 	/*
 	 * As movable pages are not isolated from LRU lists, concurrent
--- a/mm/slub.c~mm-migrate-remove-slab-checks-in-isolate_movable_page
+++ a/mm/slub.c
@@ -2429,8 +2429,6 @@ static inline struct slab *alloc_slab_pa
 
 	slab = folio_slab(folio);
 	__folio_set_slab(folio);
-	/* Make the flag visible before any changes to folio->mapping */
-	smp_wmb();
 	if (folio_is_pfmemalloc(folio))
 		slab_set_pfmemalloc(slab);
 
@@ -2651,8 +2649,6 @@ static void __free_slab(struct kmem_cach
 
 	__slab_clear_pfmemalloc(slab);
 	folio->mapping = NULL;
-	/* Make the mapping reset visible before clearing the flag */
-	smp_wmb();
 	__folio_clear_slab(folio);
 	mm_account_reclaimed_pages(pages);
 	unaccount_slab(slab, order, s);
_

Patches currently in -mm which might be from 42.hyeyoo@xxxxxxxxx are

mm-zsmalloc-convert-__zs_map_object-__zs_unmap_object-to-use-zpdesc.patch
mm-zsmalloc-add-and-use-pfn-zpdesc-seeking-funcs.patch
mm-zsmalloc-convert-obj_malloc-to-use-zpdesc.patch
mm-zsmalloc-convert-obj_allocated-and-related-helpers-to-use-zpdesc.patch
mm-zsmalloc-convert-init_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-obj_to_page-and-zs_free-to-use-zpdesc.patch
mm-zsmalloc-add-two-helpers-for-zs_page_migrate-and-make-it-use-zpdesc.patch
mm-zsmalloc-convert-__free_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-location_to_obj-to-take-zpdesc.patch
mm-zsmalloc-convert-migrate_zspage-to-use-zpdesc.patch
mm-zsmalloc-convert-get_zspage-to-take-zpdesc.patch
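
For readers wondering why the removed folio_test_slab() checks are now
unreachable: isolate_movable_page() only proceeds if it can take a
reference on the folio up front (the "if (!folio) goto out;" context line
in the first hunk), and once slab folios are allocated frozen their
refcount stays at zero, so that try-get fails before any page-flag check.
Below is a minimal, self-contained toy model of that guard, not kernel
code; the names toy_folio, toy_try_get and toy_isolate_movable are made up
for illustration, and the real kernel path goes through
folio_get_nontail_page()/get_page_unless_zero().

/* Toy model of the "grab a reference or bail" guard that makes the
 * removed slab checks redundant for frozen (refcount 0) slab folios. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_folio {
	atomic_int refcount;	/* 0 == frozen, e.g. a slab folio */
};

/* Only take a reference when the count is already non-zero,
 * mirroring the get_page_unless_zero() idea. */
static bool toy_try_get(struct toy_folio *folio)
{
	int ref = atomic_load(&folio->refcount);

	while (ref > 0) {
		if (atomic_compare_exchange_weak(&folio->refcount, &ref, ref + 1))
			return true;
	}
	return false;
}

static bool toy_isolate_movable(struct toy_folio *folio)
{
	if (!toy_try_get(folio))
		return false;	/* a frozen slab folio always stops here */
	/* ... movable-flag check, page lock, mops->isolate_page() ... */
	return true;
}

int main(void)
{
	struct toy_folio slab_folio = { .refcount = 0 };
	struct toy_folio movable_folio = { .refcount = 1 };

	printf("slab folio isolated?    %d\n", toy_isolate_movable(&slab_folio));    /* 0 */
	printf("movable folio isolated? %d\n", toy_isolate_movable(&movable_folio)); /* 1 */
	return 0;
}

Because the isolation path can never get past toy_try_get() for a frozen
folio, the later slab checks (and the smp_rmb()/smp_wmb() pairs that
ordered them against slab allocation and freeing) no longer protect
anything, which is why the patch above deletes them.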