On 12/03/22 12:22, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> 
> The patch below does not apply to the 6.0-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.

Not exactly sure of the policy here where the commit title does not
apply to the backport.  I left the title 'as is' and added a note about
differences.  The commit message already anticipated a backport and
mentions this as well.
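For anyone wanting to exercise this on 6.0, the problematic path can be
driven from userspace along the lines below.  This is only a minimal
sketch of the call sequence the commit message describes - the 2MB
anonymous shared hugetlb mapping and the fault/madvise/re-fault order
are my illustrative assumptions, not Wei Chen's actual reproducer:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL * 1024 * 1024;	/* assumes one free 2MB huge page */
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 1, len);		/* fault the huge page in */

	/* zap_page_range() -> __unmap_hugepage_range_final() */
	if (madvise(p, len, MADV_DONTNEED))
		return 1;

	/*
	 * The vma is still present here; before this patch its vma_lock
	 * had already been deleted (VM_MAYSHARE cleared on pre-vma_lock
	 * kernels), so later faults could race with truncation.
	 */
	memset(p, 1, len);
	return 0;
}

With the fix applied, the vma_lock (VM_MAYSHARE on this backport)
survives the MADV_DONTNEED call, and only the 'final' unmap from
unmap_vmas() tears it down.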
>From d0c568ab5c0b2c2c3aae8a3dd04a7c41cf1088c9 Mon Sep 17 00:00:00 2001
From: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Date: Mon, 14 Nov 2022 15:55:06 -0800
Subject: [PATCH 2/2] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED
 processing

commit 04ada095dcfc4ae359418053c0be94453bdf1e84 upstream.

madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear page
tables associated with the address range.  For hugetlb vmas,
zap_page_range will call __unmap_hugepage_range_final.  However,
__unmap_hugepage_range_final assumes the passed vma is about to be
removed and deletes the vma_lock to prevent pmd sharing as the vma is
on the way out.  In the case of madvise(MADV_DONTNEED) the vma remains,
but the missing vma_lock prevents pmd sharing and could potentially
lead to issues with truncation/fault races.

This issue was originally reported here [1] as a BUG triggered in
page_try_dup_anon_rmap.  Prior to the introduction of the hugetlb
vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to
prevent pmd sharing.  Subsequent faults on this vma were confused as
VM_MAYSHARE indicates a sharable vma, but was not set so page_mapping
was not set in new pages added to the page table.  This resulted in
pages that appeared anonymous in a VM_SHARED vma and triggered the BUG.

Address issue by adding a new zap flag ZAP_FLAG_UNMAP to indicate an
unmap call from unmap_vmas().  This is used to indicate the 'final'
unmapping of a hugetlb vma.  When called via MADV_DONTNEED, this flag
is not set and the vma_lock is not deleted.

NOTE - Prior to the introduction of the hugetlb vma_lock in v6.1, this
issue is addressed by not clearing the VM_MAYSHARE flag when
__unmap_hugepage_range_final is called in the MADV_DONTNEED case.

[1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@xxxxxxxxxxxxxx/

Link: https://lkml.kernel.org/r/20221114235507.294320-3-mike.kravetz@xxxxxxxxxx
Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Reported-by: Wei Chen <harperchen1110@xxxxxxxxx>
Cc: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mina Almasry <almasrymina@xxxxxxxxxx>
Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
---
 include/linux/mm.h |  2 ++
 mm/hugetlb.c       | 25 ++++++++++++++-----------
 mm/memory.c        |  2 +-
 3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7368a99e4e55..fa4d973c1363 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1794,6 +1794,8 @@ struct zap_details {
  * default, the flag is not set.
  */
 #define ZAP_FLAG_DROP_MARKER	((__force zap_flags_t) BIT(0))
+/* Set in unmap_vmas() to indicate a final unmap call. Only used by hugetlb */
+#define ZAP_FLAG_UNMAP		((__force zap_flags_t) BIT(1))
 
 #ifdef CONFIG_MMU
 extern bool can_do_mlock(void);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dbb558e71e9e..022a3bfafec4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5145,17 +5145,20 @@ void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 {
 	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
 
-	/*
-	 * Clear this flag so that x86's huge_pmd_share page_table_shareable
-	 * test will fail on a vma being torn down, and not grab a page table
-	 * on its way out. We're lucky that the flag has such an appropriate
-	 * name, and can in fact be safely cleared here. We could clear it
-	 * before the __unmap_hugepage_range above, but all that's necessary
-	 * is to clear it before releasing the i_mmap_rwsem. This works
-	 * because in the context this is called, the VMA is about to be
-	 * destroyed and the i_mmap_rwsem is held.
-	 */
-	vma->vm_flags &= ~VM_MAYSHARE;
+	if (zap_flags & ZAP_FLAG_UNMAP) {	/* final unmap */
+		/*
+		 * Clear this flag so that x86's huge_pmd_share
+		 * page_table_shareable test will fail on a vma being torn
+		 * down, and not grab a page table on its way out. We're lucky
+		 * that the flag has such an appropriate name, and can in fact
+		 * be safely cleared here. We could clear it before the
+		 * __unmap_hugepage_range above, but all that's necessary
+		 * is to clear it before releasing the i_mmap_rwsem. This works
+		 * because in the context this is called, the VMA is about to
+		 * be destroyed and the i_mmap_rwsem is held.
+		 */
+		vma->vm_flags &= ~VM_MAYSHARE;
+	}
 }
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
diff --git a/mm/memory.c b/mm/memory.c
index 68d5b3dcec2e..a0fdaa74091f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1712,7 +1712,7 @@ void unmap_vmas(struct mmu_gather *tlb,
 {
 	struct mmu_notifier_range range;
 	struct zap_details details = {
-		.zap_flags = ZAP_FLAG_DROP_MARKER,
+		.zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP,
 		/* Careful - we need to zap private pages too! */
 		.even_cows = true,
 	};
-- 
2.38.1