On Fri, Jul 08, 2022 at 02:36:06PM +0100, Will Deacon wrote:
> On Fri, Jul 08, 2022 at 09:18:06AM +0200, Peter Zijlstra wrote:
> > @@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
> >  
> >  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> >  {
> > -	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> > +	if (tlb->fullmm)
> >  		return;
> >  
> >  	/*
> > -	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> > -	 * the ranges growing with the unused space between consecutive VMAs,
> > -	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
> > -	 * this.
> > +	 * VM_PFNMAP is more fragile because the core mm will not track the
> > +	 * page mapcount -- there might not be page-frames for these PFNs after
> > +	 * all. Force flush TLBs for such ranges to avoid munmap() vs
> > +	 * unmap_mapping_range() races.
> >  	 */
> > -	tlb_flush_mmu_tlbonly(tlb);
> > +	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> > +		/*
> > +		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> > +		 * the ranges growing with the unused space between consecutive VMAs.
> > +		 */
> > +		tlb_flush_mmu_tlbonly(tlb);
> > +	}
> 
> We already have the vma here, so I'm not sure how much the new 'vma_pfn'
> field really buys us over checking the 'vm_flags', but perhaps that's
> cleanup for another day.

Duh, that's just me being daft again. For some raisin I was convinced
(and failed to check) that we only had the vma at start. I can easily
respin this to not need the extra variable.

How's this then?

---
Subject: mmu_gather: Force tlb-flush VM_PFNMAP vmas
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Thu Jul  7 11:51:16 CEST 2022

Jann reported a race between munmap() and unmap_mapping_range(), where
unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
VMA; however munmap() will not yet have invalidated the TLBs.

Therefore unmap_mapping_range() will complete while there are still
(stale) TLB entries for the specified range.

Mitigate this by force flushing TLBs for VM_PFNMAP ranges.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
 include/asm-generic/tlb.h |   18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -507,16 +507,22 @@ static inline void tlb_start_vma(struct
 
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
+	if (tlb->fullmm)
 		return;
 
 	/*
-	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
-	 * the ranges growing with the unused space between consecutive VMAs,
-	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
-	 * this.
+	 * VM_PFNMAP is more fragile because the core mm will not track the
+	 * page mapcount -- there might not be page-frames for these PFNs after
+	 * all. Force flush TLBs for such ranges to avoid munmap() vs
+	 * unmap_mapping_range() races.
 	 */
-	tlb_flush_mmu_tlbonly(tlb);
+	if ((vma->vm_flags & VM_PFNMAP) || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
+		/*
+		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
+		 * the ranges growing with the unused space between consecutive VMAs.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 /*
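
For completeness, a rough sketch of the interleaving described in the
changelog (illustrative only, as I read it; the exact call paths in
Jann's report may differ):

	CPU 0: munmap()			CPU 1: unmap_mapping_range()
	---------------			----------------------------
	unmap_vmas()
	  unlinks the VMA
					finds no VMA for the range;
					no-ops and returns, so the
					caller assumes no user mappings
					remain and may free / reuse the
					underlying PFNs
	(TLB invalidation only
	 happens later)

Until that deferred TLB flush, other CPUs can still have stale TLB
entries translating to the just-released PFNs. Doing the flush in
tlb_end_vma() for VM_PFNMAP ranges means the TLBs are clean by the
time the VMA is unlinked, which closes the window.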