On Oct 14, 2022, at 9:19 PM, Jann Horn <jannh@xxxxxxxxxx> wrote:

> Hi!
>
> I haven't actually managed to reproduce this behavior, so maybe I'm
> just misunderstanding how this works; but I think the
> arch_tlbbatch_flush() path for batched TLB flushing in vmscan ought to
> have some kind of integration with mm_tlb_flush_nested().
>
> I think that currently, the following race could happen:
>
> [initial situation: page P is mapped into a page table of task B, but
> the page is not referenced, the PTE's A/D bits are clear]
>
> A: vmscan begins
> A: vmscan looks at P and P's PTEs, and concludes that P is not
>    currently in use
> B: reads from P through the PTE, setting the Accessed bit and
>    creating a TLB entry
> A: vmscan enters try_to_unmap_one()
> A: try_to_unmap_one() calls should_defer_flush(), which returns true
> A: try_to_unmap_one() removes the PTE and queues a TLB flush
>    (arch_tlbbatch_add_mm())
> A: try_to_unmap_one() returns, try_to_unmap() returns to
>    shrink_folio_list()
> B: calls munmap() on the VMA that mapped P
> B: no PTEs are removed, so no TLB flush happens

Unless I am missing something, flush_tlb_batched_pending() would be
called and do the flushing at this point, no?

IIUC the scenario, we had some similar cases in the past [1].
Discussing these scenarios required too many arguments for my liking,
and I would've preferred coordination between the batching mechanisms
that is easier to reason about. I proposed some schemes in the past,
but to be fair, I think all of them would have some extra overhead.

[1] https://lore.kernel.org/linux-mm/69BBEB97-1B10-4229-9AEF-DE19C26D8DFF@xxxxxxxxx/T/#u
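
For reference, this is roughly the mechanism I mean (a from-memory
sketch, not verbatim kernel code; the helper names here are made up,
though arch_tlbbatch_add_mm(), flush_tlb_mm() and the
mm->tlb_flush_batched flag are real): the reclaim side marks the mm
when it defers a flush, and the zap/munmap path checks that mark
instead of assuming "no PTEs changed, so no flush needed":

/* Reclaim side (try_to_unmap_one(), should_defer_flush() == true):
 * queue the flush and mark the mm as having a batched flush pending.
 */
static void set_tlb_ubc_flush_pending_sketch(struct mm_struct *mm)
{
	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;

	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
	tlb_ubc->flush_required = true;

	/* keep the mark ordered after the PTE clear */
	barrier();
	mm->tlb_flush_batched = true;
}

/* munmap/zap side (e.g. zap_pte_range()): if reclaim left a flush
 * pending for this mm, do it now, before the caller concludes that
 * no flush is required.
 */
void flush_tlb_batched_pending_sketch(struct mm_struct *mm)
{
	if (data_race(mm->tlb_flush_batched)) {
		flush_tlb_mm(mm);

		/* do not clear the mark before the flush is done */
		barrier();
		mm->tlb_flush_batched = false;
	}
}

If I remember the call site correctly, the zap path makes this call
unconditionally once the PTE lock is taken, so in the scenario above
B's munmap() would perform the flush that A deferred even though B
finds no PTEs left to remove.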