On Tue, 21 Jan 2025, Roman Gushchin wrote:

> Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
> added a forced tlbflush to tlb_end_vma(),

Yes, I think that was a poor way of fixing the bug in question.

> which is required to avoid a
> race between munmap() and unmap_mapping_range(). However, it added some
> overhead to other paths where tlb_end_vma() is used, but vmas are not
> removed, e.g. madvise(MADV_DONTNEED).

Right.

> 
> Fix this by moving the tlb flush out of tlb_end_vma() into
> free_pgtables(), somewhat similar to the stable version of the
> original commit: e.g. stable commit 895428ee124a ("mm: Force TLB flush
> for PFNMAP mappings before unlink_file_vma()").

Something like this patch will be a good improvement: but not this
version of the patch.

Because the mmu_gather may be gathering across many vmas, but
tlb_start_vma(), or rather its helper tlb_update_vma_flags(), says

	tlb->vma_pfn = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));

so a following vma may reset vma_pfn too soon: more care is needed.

But then vma_pfn should probably also be reset to 0 somewhere, to avoid
an extra TLB flush in free_pgtables() when the flush has already been
done.

Perhaps vma_pfn should follow the same pattern of initialization,
setting and clearing as cleared_ptes etc, instead of following
vma_huge and vma_exec. Perhaps, but it is something different, and
I've not yet checked enough to be sure: tlb.h is still a maze too
twisty for me.
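For concreteness, a minimal sketch of that cleared_ptes-like pattern,
untested, and assuming it is safe to clear vma_pfn whenever the
gathered range is reset (the existing comment in __tlb_reset_range(),
which warns against resetting the vma_* fields there, would need
revisiting for this):

	/* In __tlb_reset_range(), alongside the existing clearing: */
	tlb->cleared_p4ds = 0;		/* existing */
	tlb->vma_pfn = 0;		/* new: no unflushed pfnmap range */

	/*
	 * In tlb_update_vma_flags(), accumulate rather than assign, so
	 * that a later non-pfnmap vma cannot clear the flag before the
	 * flush covering an earlier VM_PFNMAP vma has been issued:
	 */
	tlb->vma_pfn |= !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));

And since __tlb_gather_mmu() already calls __tlb_reset_range(), that
would give vma_pfn a proper initialization at gather start too.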
Hugh (after power outage interrupted reply)

> 
> Note, that if tlb->fullmm is set, no flush is required, as the whole
> mm is about to be destroyed.
> 
> Suggested-by: Jann Horn <jannh@xxxxxxxxxx>
> Signed-off-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Nick Piggin <npiggin@xxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: linux-arch@xxxxxxxxxxxxxxx
> Cc: linux-mm@xxxxxxxxx
> ---
>  include/asm-generic/tlb.h | 16 ++++------------
>  mm/memory.c               |  7 +++++++
>  2 files changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 709830274b75..411daa96f57a 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -549,22 +549,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
>  
>  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
> -	if (tlb->fullmm)
> +	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
>  		return;
>  
>  	/*
> -	 * VM_PFNMAP is more fragile because the core mm will not track the
> -	 * page mapcount -- there might not be page-frames for these PFNs after
> -	 * all. Force flush TLBs for such ranges to avoid munmap() vs
> -	 * unmap_mapping_range() races.
> +	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> +	 * the ranges growing with the unused space between consecutive VMAs.
>  	 */
> -	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> -		/*
> -		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> -		 * the ranges growing with the unused space between consecutive VMAs.
> -		 */
> -		tlb_flush_mmu_tlbonly(tlb);
> -	}
> +	tlb_flush_mmu_tlbonly(tlb);
>  }
>  
>  /*
> diff --git a/mm/memory.c b/mm/memory.c
> index 398c031be9ba..2071415f68dd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
>  {
>  	struct unlink_vma_file_batch vb;
>  
> +	/*
> +	 * Ensure we have no stale TLB entries by the time this mapping is
> +	 * removed from the rmap.
> +	 */
> +	if (tlb->vma_pfn && !tlb->fullmm)
> +		tlb_flush_mmu(tlb);
> +
>  	do {
>  		unsigned long addr = vma->vm_start;
>  		struct vm_area_struct *next;
> -- 
> 2.48.0.rc2.279.g1de40edade-goog
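P.S. To spell out how that sketch would interact with the
free_pgtables() hunk above (again untested, just reasoning on top of
this patch): with vma_pfn cleared on every range reset, the new check

	if (tlb->vma_pfn && !tlb->fullmm)
		tlb_flush_mmu(tlb);

would only fire when a VM_PFNMAP/VM_MIXEDMAP range has been gathered
and not yet flushed, which is exactly the "no extra TLB flush"
behaviour I was asking for above.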