On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> On 5/28/24 22:00, Byungchul Park wrote:
> > All the code updating ptes already performs the TLB flush needed in a
> > safe way if it's inevitable, e.g. munmap. LUF, which controls when to
> > flush at a higher level than arch code, just leaves stale read-only
> > TLB entries that are currently supposed to be in use. Could you give
> > a scenario that you are concerned about?
>
> Let's go back to this scenario:
>
> 	fd = open("/some/file", O_RDONLY);
> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> 	foo1 = *ptr1;
>
> There's a read-only PTE at 'ptr1'. Right? The page being pointed to is
> eligible for LUF via the try_to_unmap() paths. In other words, the page
> might be reclaimed at any time. If it is reclaimed, the PTE will be
> cleared.
>
> Then, the user might do:
>
> 	munmap(ptr1, PAGE_SIZE);
>
> which will _eventually_ wind up in the zap_pte_range() loop. But that
> loop will only see pte_none(). It doesn't do _anything_ to the 'struct
> mmu_gather'.
>
> The munmap() then lands in tlb_flush_mmu_tlbonly(), where it looks at
> the 'struct mmu_gather':
>
>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>               tlb->cleared_pmds || tlb->cleared_puds ||
>               tlb->cleared_p4ds))
>                 return;
>
> But since there were no cleared PTEs (or anything else) during the
> unmap, this just returns and doesn't flush the TLB.
>
> We now have an address space with a stale TLB entry at 'ptr1' and not
> even a VMA there. There's nothing to stop a new VMA from going in,
> installing a *new* PTE, but getting data from the stale TLB entry that
> still hasn't been flushed.

Thank you for the explanation; I see the problem now. I think I could
handle this case through a new flag in the vma, or somewhere similar,
indicating that LUF has deferred a necessary TLB flush for it during
unmapping, so that the mmu_gather mechanism can be aware of it. A rough
sketch of the idea is at the end of this mail. Of course, the
performance change should be checked again. Thoughts?

Thanks again.

	Byungchul
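
To illustrate the idea, here is a minimal, untested sketch against
tlb_flush_mmu_tlbonly(). luf_test_and_clear_pending() and the per-mm
pending flag it consults are hypothetical names used only for this
example; the real change would also have to set that flag wherever LUF
actually skips the flush at unmap time.

	static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		if (!(tlb->freed_tables || tlb->cleared_ptes ||
		      tlb->cleared_pmds || tlb->cleared_puds ||
		      tlb->cleared_p4ds)) {
			/*
			 * Hypothetical: LUF sets a pending bit on the mm
			 * when it defers the flush for unmapped read-only
			 * pages. Consume it here and flush the whole mm
			 * instead of returning early, so a later munmap()
			 * that only sees pte_none() cannot leave the stale
			 * entries behind.
			 */
			if (luf_test_and_clear_pending(tlb->mm))
				flush_tlb_mm(tlb->mm);
			return;
		}

		tlb_flush(tlb);
		__tlb_reset_range(tlb);
	}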