Re: [PATCH v2 1/2] mm: use aligned address in clear_gigantic_page()

On 2024/10/28 21:14, David Hildenbrand wrote:
On 28.10.24 13:52, Kefeng Wang wrote:


On 2024/10/28 18:00, David Hildenbrand wrote:
On 26.10.24 07:43, Kefeng Wang wrote:
When clearing a gigantic page, it zeros the page from the first page to the
last page. If the addr_hint passed in is not the address of the first page
of the folio, some architectures could flush the wrong cache lines if they
do use addr_hint as a hint. For a non-gigantic page, the base address is
calculated internally, so even with a wrong addr_hint there is only a
performance impact (process_huge_page() wants to process the target page
last to keep its cache lines hot), no functional impact.

Let's pass the real accessed address to folio_zero_user() and use the
aligned address in clear_gigantic_page() to fix it.

Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
v2:
- update changelog to clarify the impact, per Andrew

   fs/hugetlbfs/inode.c | 2 +-
   mm/memory.c          | 1 +
   2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a4441fb77f7c..a5ea006f403e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
               error = PTR_ERR(folio);
               goto out;
           }
-        folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
+        folio_zero_user(folio, addr);
           __folio_mark_uptodate(folio);
           error = hugetlb_add_to_page_cache(folio, mapping, index);
           if (unlikely(error)) {
diff --git a/mm/memory.c b/mm/memory.c
index 75c2dfd04f72..ef47b7ea5ddd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6821,6 +6821,7 @@ static void clear_gigantic_page(struct folio *folio, unsigned long addr,
       int i;
       might_sleep();
+    addr = ALIGN_DOWN(addr, folio_size(folio));

Right, that's what's effectively done in a very bad way in
process_huge_page()

unsigned long addr = addr_hint &
               ~(((unsigned long)nr_pages << PAGE_SHIFT) - 1);


That should all be cleaned up ... process_huge_page() likely shouldn't

Yes, let's fix the bug first.

be even consuming "nr_pages".

Not sure about this part; it uses nr_pages as the end and to calculate the
'base'.

It should be using folio_nr_pages().

But process_huge_page() has no explicit folio argument. I'd like to move the
aligned-address calculation into folio_zero_user() and copy_user_large_folio()
(which will be renamed to folio_copy_user()) in the follow-up cleanup patches,
or should I do it in the fix patches?





