On 31.07.24 07:46, Baolin Wang wrote:
> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page
> cache if needed"), ARM64 can support 512MB PMD-sized THP when the base
> page size is 64KB, which is larger than the maximum supported page
> cache size MAX_PAGECACHE_ORDER. This is not expected. To fix this
> issue, use THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable
> huge orders.
> 
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> ---
>  mm/shmem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>  	unsigned long vm_flags = vma->vm_flags;
>  	/*
> -	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>  	 * are enabled for this vma.
>  	 */
> -	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>  	loff_t i_size;
>  	int order;
Acked-by: David Hildenbrand <david@xxxxxxxxxx>

-- 
Cheers,

David / dhildenb