[PATCH] mm/x86/pat: Only untrack the pfn range if unmap region

This is one patch from an old series [1], reposted standalone here in
the hope of fixing the untrack_pfn() issues reported recently [2,3].
There was another fix [4] for those, but unfortunately it looks like it
causes other issues.  The hope is that this patch fixes the problem the
right way.

X86 uses pfn tracking for pfnmaps.  AFAICT, the tracking should normally
start at mmap() time in device drivers, then be dropped at munmap().
However, in the current code the untracking is done in
unmap_single_vma().  This might be problematic.
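
As an illustration only (the driver and the mydrv_mmio_phys address are
made up, not part of this patch), a typical driver mmap handler that
establishes such a pfnmap looks roughly like below; on x86 it's the
remap_pfn_range() call that starts the PAT tracking, which should only
be dropped again at munmap():

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical device MMIO base, for illustration only */
static phys_addr_t mydrv_mmio_phys;

/*
 * Hypothetical driver: map a region of device MMIO into userspace.
 * remap_pfn_range() sets VM_PFNMAP and, on x86, starts PAT tracking
 * for the vma; that tracking is expected to go away only at munmap().
 */
static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pfn = mydrv_mmio_phys >> PAGE_SHIFT;

	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}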

For example, unmap_single_vma() can nowadays be used even for zapping a
single page rather than a whole vma.  It's very confusing to do
whole-vma untracking in this function when a caller only wants to zap
one page.  It could simply be wrong.
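
As an illustrative sketch (not part of the patch), zapping a single
page of a pfnmap vma funnels through that same function:

#include <linux/mm.h>

/*
 * Illustration: zap one page of a pfnmap vma.  This goes through
 * zap_page_range_single() -> unmap_single_vma(), which before this
 * patch called untrack_pfn(vma, 0, 0, ...) and dropped the PAT
 * tracking for the whole vma even though only one page was zapped.
 */
static void zap_one_page(struct vm_area_struct *vma, unsigned long addr)
{
	zap_page_range_single(vma, addr, PAGE_SIZE, NULL);
}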

Such an issue won't be exposed by things like MADV_DONTNEED, because
MADV_DONTNEED never works for pfnmaps and the madvise() call already
fails before reaching this code.  However, it looks like it can be
triggered as reported [2,3], where the path is invoked by an unmap
request on a file vma.
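
For context, a simplified sketch (the helper name is made up, this is
not the exact mm/madvise.c code) of the kind of check that makes
MADV_DONTNEED bail out for pfnmap vmas before the zap path is reached:

/*
 * Hypothetical helper mirroring the madvise() sanity check:
 * MADV_DONTNEED fails with -EINVAL for VM_PFNMAP vmas, so the
 * untrack_pfn() call in unmap_single_vma() is never reached that way.
 */
static bool dontneed_vma_ok(struct vm_area_struct *vma)
{
	return !(vma->vm_flags & (VM_PFNMAP | VM_HUGETLB | VM_LOCKED));
}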

There's also work [5] on VFIO (merged now) to allow tearing down MMIO
pgtables before munmap(), in which case we may not want to untrack the
pfns if we're only tearing down the pgtables.  IOW, we may want to keep
the pfn tracking information, as those pfn mappings can be restored
later with the same vma object.  Currently it's not an immediate problem
for VFIO, as VFIO uses UC- by default, but it looks like there's a plan
to extend that in the near future.

IIUC, this was overlooked when zap_page_range_single() was introduced;
in the past this path was only used for munmap(), which always unmaps
the region completely.  E.g., commit f5cc4eef9987 ("VM: make
zap_page_range() callers that act on a single VMA use separate helper")
is the commit that introduced unmap_single_vma(), moving the
untrack_pfn() chunk over from unmap_vmas().

Recover that behavior: untrack the pfnmap only when unmapping the whole
region.

[1] https://lore.kernel.org/r/20240523223745.395337-1-peterx@xxxxxxxxxx
[2] https://groups.google.com/g/syzkaller-bugs/c/FeQZvSbqWbQ/m/tHFmoZthAAAJ
[3] https://lore.kernel.org/r/20240712131931.20207-1-00107082@xxxxxxx
[4] https://lore.kernel.org/all/20240710-bug12-v1-1-0e5440f9b8d3@xxxxxxxxx/
[5] https://lore.kernel.org/r/20240523195629.218043-1-alex.williamson@xxxxxxxxxx

Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: x86@xxxxxxxxxx
Cc: Yan Zhao <yan.y.zhao@xxxxxxxxx>
Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
Cc: Pei Li <peili.dev@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: David Wang <00107082@xxxxxxx>
Cc: Bert Karwatzki <spasswolf@xxxxxx>
Cc: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---

NOTE: I massaged the commit message compared to the rfc post [1]; the
patch itself is untouched.  Also removed the rfc tag and added more
people to the loop.  Please kindly help test this patch if you have a
reproducer, as I can't reproduce the issue myself even with the syzbot
reproducer on top of mm-unstable.  Instead of digging further into the
reproducer, I decided to send this out first, as we have a bunch of
reproducers on the list now.
---
 mm/memory.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4bcd79619574..f57cc304b318 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1827,9 +1827,6 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 	if (vma->vm_file)
 		uprobe_munmap(vma, start, end);
 
-	if (unlikely(vma->vm_flags & VM_PFNMAP))
-		untrack_pfn(vma, 0, 0, mm_wr_locked);
-
 	if (start != end) {
 		if (unlikely(is_vm_hugetlb_page(vma))) {
 			/*
@@ -1894,6 +1891,8 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
 		unsigned long start = start_addr;
 		unsigned long end = end_addr;
 		hugetlb_zap_begin(vma, &start, &end);
+		if (unlikely(vma->vm_flags & VM_PFNMAP))
+			untrack_pfn(vma, 0, 0, mm_wr_locked);
 		unmap_single_vma(tlb, vma, start, end, &details,
 				 mm_wr_locked);
 		hugetlb_zap_end(vma, &details);
-- 
2.45.0




