Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem

On 2024/7/31 17:59, Kefeng Wang wrote:


On 2024/7/31 16:56, Baolin Wang wrote:


On 2024/7/31 14:18, Barry Song wrote:
On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
<baolin.wang@xxxxxxxxxxxxxxxxx> wrote:

Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page
cache if needed"), ARM64 can support 512MB PMD-sized THP when the base
page size is 64KB, which is larger than the maximum folio size the page
cache supports (capped at MAX_PAGECACHE_ORDER). This is not expected.
To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter
the allowable huge orders.

Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
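
For anyone checking the arithmetic, here is a quick userspace sketch of
the numbers on ARM64 with a 64KB base page. It is an illustration, not
kernel code, and the order-11 page-cache cap below is an assumption
taken from the MAX_PAGECACHE_ORDER discussion rather than derived here:

#include <stdio.h>

int main(void)
{
	const unsigned int page_shift = 16;    /* 64KB base pages */
	const unsigned int pmd_shift = 29;     /* one PMD maps 512MB on ARM64/64K */
	const unsigned int pmd_order = pmd_shift - page_shift;  /* 13 */
	const unsigned int pagecache_cap = 11; /* assumed MAX_PAGECACHE_ORDER */

	/* mask used before this patch: every order up to PMD_ORDER */
	unsigned long old_mask = (1UL << (pmd_order + 1)) - 1;
	/* mask limited to the page cache cap */
	unsigned long capped_mask = (1UL << (pagecache_cap + 1)) - 1;

	printf("PMD order %u => %lu MB folios\n", pmd_order,
	       (1UL << (pmd_order + page_shift)) >> 20);
	printf("orders above the page cache cap: %#lx\n",
	       old_mask & ~capped_mask);   /* 0x3000: orders 12 and 13 */
	return 0;
}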

Reviewed-by: Barry Song <baohua@xxxxxxxxxx>

Thanks for reviewing.


---
  mm/shmem.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 2faa9daaf54b..a4332a97558c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
         unsigned long vm_flags = vma->vm_flags;
         /*
-        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
+        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
          * are enabled for this vma.

Nit:
THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
I feel we don't need this comment?
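
(For reference, THP_ORDERS_ALL_FILE_DEFAULT lives in
include/linux/huge_mm.h; around the time of this thread it reads along
the lines of the following, i.e. every file-backed large order up to
MAX_PAGECACHE_ORDER, which is why a separate comment adds little:

#define THP_ORDERS_ALL_FILE_DEFAULT	\
	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

Check the tree you are working on for the exact definition.)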

Sure.

Andrew, please help to squash the following changes into this patch. Thanks.

Maybe drop unsigned long orders too?

diff --git a/mm/shmem.c b/mm/shmem.c
index 6af95f595d6f..8485eb6f2ec4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
         unsigned long mask = READ_ONCE(huge_shmem_orders_always);
        unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
         unsigned long vm_flags = vma ? vma->vm_flags : 0;
-       /*
-        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
-        * are enabled for this vma.
-        */
-       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
         bool global_huge;
         loff_t i_size;
         int order;
@@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
         if (global_huge)
                 mask |= READ_ONCE(huge_shmem_orders_inherit);

-       return orders & mask;
+       return THP_ORDERS_ALL_FILE_DEFAULT & mask;
  }
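
With both fixups folded in, the tail of shmem_allowable_huge_orders()
would end up roughly as below (reconstructed from the two hunks in this
thread, with the unchanged middle of the function elided):

	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
	unsigned long vm_flags = vma ? vma->vm_flags : 0;
	bool global_huge;
	loff_t i_size;
	int order;
	...
	if (global_huge)
		mask |= READ_ONCE(huge_shmem_orders_inherit);

	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
}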

Yes. Good point. Thanks.
(Hope Andrew can help to squash these changes :))



