The quilt patch titled
     Subject: mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
has been removed from the -mm tree.  Its filename was
     mm-shmem-avoid-allocating-huge-pages-larger-than-max_pagecache_order-for-shmem.patch

This patch was dropped because it was merged into the mm-hotfixes-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
Date: Wed, 31 Jul 2024 13:46:19 +0800

Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
if needed"), ARM64 can support 512MB PMD-sized THP when the base page size
is 64KB, which is larger than the maximum supported page cache size
MAX_PAGECACHE_ORDER.  This is not expected.

To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter the
allowable huge orders.
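[Editor's note: the order arithmetic behind this fix can be illustrated with
a minimal standalone sketch.  This is user-space C, not kernel code:
PMD_ORDER and MAX_PAGECACHE_ORDER are hard-coded here to the arm64/64KB-base-
page values described above, and the BIT()/THP_ORDERS_ALL_FILE_DEFAULT
definitions are copied in for illustration, mirroring include/linux/huge_mm.h.]

#include <stdio.h>

#define BIT(n)			(1UL << (n))
#define PMD_ORDER		13	/* arm64, 64KB pages: 2^13 * 64KB = 512MB */
#define MAX_PAGECACHE_ORDER	11	/* page cache limit on this config */

/* All (large) orders from 1 up to MAX_PAGECACHE_ORDER, excluding order 0 */
#define THP_ORDERS_ALL_FILE_DEFAULT \
	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

int main(void)
{
	/* The local mask this patch removes from shmem_allowable_huge_orders() */
	unsigned long old_mask = BIT(PMD_ORDER + 1) - 1;
	unsigned long new_mask = THP_ORDERS_ALL_FILE_DEFAULT;

	printf("old mask: %#lx\n", old_mask);	/* 0x3fff: orders 0..13 */
	printf("new mask: %#lx\n", new_mask);	/* 0xffe:  orders 1..11 */
	printf("huge orders allowed before but unsupported by the page cache: %#lx\n",
	       old_mask & ~new_mask & ~BIT(0));	/* 0x3000: orders 12 and 13 */
	return 0;
}

[With these values the old mask permits orders 12 and 13 (256MB and 512MB
folios), both beyond what the page cache can index, while the new mask
stops at order 11.]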
[baolin.wang@xxxxxxxxxxxxxxxxx: remove comment, per Barry]
  Link: https://lkml.kernel.org/r/c55d7ef7-78aa-4ed6-b897-c3e03a3f3ab7@xxxxxxxxxxxxxxxxx
[wangkefeng.wang@xxxxxxxxxx: remove local `orders']
  Link: https://lkml.kernel.org/r/87769ae8-b6c6-4454-925d-1864364af9c8@xxxxxxxxxx
Link: https://lkml.kernel.org/r/117121665254442c3c7f585248296495e5e2b45c.1722404078.git.baolin.wang@xxxxxxxxxxxxxxxxx
Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: Barry Song <baohua@xxxxxxxxxx>
Cc: Barry Song <21cnbao@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Gavin Shan <gshan@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Lance Yang <ioworker0@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/shmem.c |    7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

--- a/mm/shmem.c~mm-shmem-avoid-allocating-huge-pages-larger-than-max_pagecache_order-for-shmem
+++ a/mm/shmem.c
@@ -1629,11 +1629,6 @@ unsigned long shmem_allowable_huge_order
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma->vm_flags;
-	/*
-	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
-	 * are enabled for this vma.
-	 */
-	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
 	loff_t i_size;
 	int order;
 
@@ -1678,7 +1673,7 @@ unsigned long shmem_allowable_huge_order
 	if (global_huge)
 		mask |= READ_ONCE(huge_shmem_orders_inherit);
 
-	return orders & mask;
+	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
 }
 
 static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf,
_

Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are

mm-shmem-simplify-the-suitable-huge-orders-validation-for-tmpfs.patch
mm-shmem-rename-shmem_is_huge-to-shmem_huge_global_enabled.patch
mm-shmem-move-shmem_huge_global_enabled-into-shmem_allowable_huge_orders.patch
mm-vmscan-add-validation-before-spliting-shmem-large-folio.patch
mm-swap-extend-swap_shmem_alloc-to-support-batch-swap_map_shmem-flag-setting.patch
mm-shmem-extend-shmem_partial_swap_usage-to-support-large-folio-swap.patch
mm-filemap-use-xa_get_order-to-get-the-swap-entry-order.patch
mm-shmem-use-swap_free_nr-to-free-shmem-swap-entries.patch
mm-shmem-support-large-folio-allocation-for-shmem_replace_folio.patch
mm-shmem-drop-folio-reference-count-using-nr_pages-in-shmem_delete_from_page_cache.patch
mm-shmem-split-large-entry-if-the-swapin-folio-is-not-large.patch
mm-shmem-support-large-folio-swap-out.patch