The patch titled
     Subject: mm: use aligned address in clear_gigantic_page()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-use-aligned-address-in-clear_gigantic_page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-use-aligned-address-in-clear_gigantic_page.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm: use aligned address in clear_gigantic_page()
Date: Mon, 28 Oct 2024 22:56:55 +0800

In the current kernel, hugetlb_no_page() calls folio_zero_user() with the
fault address, which may not be aligned to the huge page size.
folio_zero_user() may then call clear_gigantic_page() with that unaligned
address, even though clear_gigantic_page() requires a huge-page-size
aligned address.  This can cause memory corruption or an information
leak.  Fix it by aligning the address inside clear_gigantic_page(), and
use the more descriptive name 'addr_hint' instead of 'addr' for its
parameter.

Link: https://lkml.kernel.org/r/20241028145656.932941-1-wangkefeng.wang@xxxxxxxxxx
Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/hugetlbfs/inode.c |    2 +-
 mm/memory.c          |    3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

--- a/fs/hugetlbfs/inode.c~mm-use-aligned-address-in-clear_gigantic_page
+++ a/fs/hugetlbfs/inode.c
@@ -819,7 +819,7 @@ static long hugetlbfs_fallocate(struct f
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
+		folio_zero_user(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
--- a/mm/memory.c~mm-use-aligned-address-in-clear_gigantic_page
+++ a/mm/memory.c
@@ -6804,9 +6804,10 @@ static inline int process_huge_page(
 	return 0;
 }
 
-static void clear_gigantic_page(struct folio *folio, unsigned long addr,
+static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
 				unsigned int nr_pages)
 {
+	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
	int i;
 
 	might_sleep();
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-remove-unused-hugepage-for-vma_alloc_folio.patch
tmpfs-dont-enable-large-folios-if-not-supported.patch
mm-huge_memory-move-file_thp_enabled-into-huge_memoryc.patch
mm-shmem-remove-__shmem_huge_global_enabled.patch
mm-use-aligned-address-in-clear_gigantic_page.patch
mm-use-aligned-address-in-copy_user_gigantic_page.patch
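
For illustration, below is a minimal userspace sketch (not kernel code) of
the alignment the fix adds to clear_gigantic_page().  The ALIGN_DOWN here
is the usual power-of-two mask, equivalent to the kernel macro for
power-of-two sizes; folio_sz, the sample addr_hint value, and a 64-bit
unsigned long are illustrative assumptions, with folio_sz standing in for
folio_size(folio).

/*
 * Sketch only: align a fault address hint down to the start of the
 * gigantic page that contains it, as the patched callee now does.
 */
#include <stdio.h>

#define ALIGN_DOWN(x, a)	((x) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long folio_sz = 1UL << 30;		/* a 1 GiB gigantic page */
	unsigned long addr_hint = (5UL << 30) + 0x1234;	/* unaligned fault address */
	unsigned long addr = ALIGN_DOWN(addr_hint, folio_sz);

	/* prints: hint=0x140001234 -> base=0x140000000 */
	printf("hint=%#lx -> base=%#lx\n", addr_hint, addr);
	return 0;
}

Aligning inside the callee means callers such as hugetlbfs_fallocate() can
pass the raw fault address, which is why the first hunk above drops the
caller-side ALIGN_DOWN().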