Re: [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy

On 2024/12/19 23:35, David Hildenbrand wrote:
> On 19.12.24 08:30, Baolin Wang wrote:
>> When the shmem per-size within_size policy is enabled, using the
>> 'order' value itself as the round_up() alignment for the index leads
>> to incorrect i_size checks, which can result in inappropriately large
>> orders being returned.
>>
>> Change to using '1 << order' (the folio size in pages) as the
>> round_up() alignment to fix this issue. Additionally, add an
>> 'aligned_index' variable so that the alignment does not modify the
>> index used by the other checks.
>>
>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>> ---
>> Hi Andrew,
>>
>> These two bugfix patches are based on the mm-hotfixes-unstable branch,
>> and this patch conflicts slightly with my previous patch set
>> "Support large folios for tmpfs". However, I think the conflicts are
>> easy to resolve. If you need me to rebase and resend the
>> "Support large folios for tmpfs" patch set, please let me know.
>> Sorry for the trouble :)
>> ---
>>   mm/shmem.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index f6fb053ac50d..dec659e84562 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1689,6 +1689,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>       unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>>       unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>       unsigned long vm_flags = vma ? vma->vm_flags : 0;
>> +    pgoff_t aligned_index;
>>       bool global_huge;
>>       loff_t i_size;
>>       int order;
>> @@ -1723,9 +1724,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>       /* Allow mTHP that will be fully within i_size. */
>>       order = highest_order(within_size_orders);
>>       while (within_size_orders) {
>> -        index = round_up(index + 1, order);
>> +        aligned_index = round_up(index + 1, 1 << order);
>>           i_size = round_up(i_size_read(inode), PAGE_SIZE);
>> -        if (i_size >> PAGE_SHIFT >= index) {
>> +        if (i_size >> PAGE_SHIFT >= aligned_index) {
>>               mask |= within_size_orders;
>>               break;
>>           }
>
> Yes, that matches the logic in shmem_huge_global_enabled().
>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
>
> I was wondering whether one could factor that out into a helper that
> takes an optional write_end ...

Yes, I will add it to my TODO list. Thanks for reviewing.
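
For reference, a minimal userspace sketch of the arithmetic (not kernel
code; the round_up() below mirrors the kernel macro's power-of-two
semantics, and the index/order/i_size values are illustrative
assumptions) shows why aligning to 'order' instead of '1 << order' lets
the i_size check pass for a folio that would extend past EOF:

/* gcc sketch.c && ./a.out */
#include <stdio.h>

/* Power-of-two round_up(), matching the kernel macro's semantics. */
#define round_up(x, y)  ((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
        unsigned long index = 0;        /* faulting page index */
        int order = 4;                  /* candidate mTHP order: 16 pages */
        unsigned long isize_pages = 4;  /* i_size covers only 4 pages */

        /* Buggy: aligns to 'order' (4), not the folio size in pages. */
        unsigned long buggy = round_up(index + 1, order);
        /* Fixed: aligns to '1 << order' (16), the real folio end index. */
        unsigned long fixed = round_up(index + 1, 1UL << order);

        /* 4 >= 4: order-4 folio wrongly allowed despite spanning 16 pages */
        printf("buggy end index %lu -> allowed = %d\n",
               buggy, isize_pages >= buggy);
        /* 4 >= 16 is false: order 4 correctly rejected */
        printf("fixed end index %lu -> allowed = %d\n",
               fixed, isize_pages >= fixed);
        return 0;
}

With the corrected alignment, an order that does not fit fully within
i_size fails the check and the loop moves on to the next smaller order
in within_size_orders.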



