The patch titled
     Subject: mm: use aligned address in copy_user_gigantic_page()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-use-aligned-address-in-copy_user_gigantic_page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-use-aligned-address-in-copy_user_gigantic_page.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm: use aligned address in copy_user_gigantic_page()
Date: Sat, 26 Oct 2024 13:43:07 +0800

When copying a gigantic page, copy_user_gigantic_page() copies from the
first page of the folio to the last.  If the addr_hint passed in is not
the address of the first page of the folio, architectures that use the
hint may flush the wrong cache lines.

For non-gigantic pages there is no functional impact: process_huge_page()
calculates the base address internally, so a wrong addr_hint only costs
performance, because process_huge_page() wants to process the target page
last to keep its cache lines hot.

Pass the real accessed address to copy_user_large_folio() and use the
aligned address in copy_user_gigantic_page() to fix this.
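
For context, a minimal sketch of the fixed copy path (illustrative only:
the name copy_user_gigantic_page_sketch and the exact parameter list are
assumptions, not the kernel's code):

/*
 * Illustrative sketch, not the kernel implementation: the per-page loop
 * walks the folio from page 0 upwards, so "addr" must be the folio's
 * base address, not the (possibly unaligned) faulting address.
 */
static int copy_user_gigantic_page_sketch(struct folio *dst, struct folio *src,
					  unsigned long addr_hint,
					  struct vm_area_struct *vma,
					  unsigned int nr_pages)
{
	/* Align the hint down to the first page of the gigantic folio. */
	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		struct page *dst_page = folio_page(dst, i);
		struct page *src_page = folio_page(src, i);

		cond_resched();
		/* Arch code may flush caches keyed off this virtual address. */
		if (copy_mc_user_highpage(dst_page, src_page,
					  addr + i * PAGE_SIZE, vma))
			return -EHWPOISON;
	}
	return 0;
}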
Link: https://lkml.kernel.org/r/20241026054307.3896926-2-wangkefeng.wang@xxxxxxxxxx
Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    5 ++---
 mm/memory.c  |    1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

--- a/mm/hugetlb.c~mm-use-aligned-address-in-copy_user_gigantic_page
+++ a/mm/hugetlb.c
@@ -5338,7 +5338,7 @@ again:
 					break;
 				}
 				ret = copy_user_large_folio(new_folio, pte_folio,
-							ALIGN_DOWN(addr, sz), dst_vma);
+							addr, dst_vma);
 				folio_put(pte_folio);
 				if (ret) {
 					folio_put(new_folio);
@@ -6641,8 +6641,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop,
-					ALIGN_DOWN(dst_addr, size), dst_vma);
+		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
--- a/mm/memory.c~mm-use-aligned-address-in-copy_user_gigantic_page
+++ a/mm/memory.c
@@ -6849,6 +6849,7 @@ static int copy_user_gigantic_page(struc
 	struct page *dst_page;
 	struct page *src_page;
 
+	addr = ALIGN_DOWN(addr, folio_size(dst));
 	for (i = 0; i < nr_pages; i++) {
 		dst_page = folio_page(dst, i);
 		src_page = folio_page(src, i);
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-remove-unused-hugepage-for-vma_alloc_folio.patch
tmpfs-dont-enable-large-folios-if-not-supported.patch
mm-huge_memory-move-file_thp_enabled-into-huge_memoryc.patch
mm-shmem-remove-__shmem_huge_global_enabled.patch
mm-use-aligned-address-in-clear_gigantic_page.patch
mm-use-aligned-address-in-copy_user_gigantic_page.patch