"Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> writes: > On 5/21/21 11:43 AM, Linus Torvalds wrote: >> On Thu, May 20, 2021 at 5:03 PM Aneesh Kumar K.V >> <aneesh.kumar@xxxxxxxxxxxxx> wrote: >>> >>> On 5/21/21 8:10 AM, Linus Torvalds wrote: >>>> >>>> So mremap does need to flush the TLB before releasing the page table >>>> lock, because that's the lifetime boundary for the page that got >>>> moved. >>> >>> How will we avoid that happening with >>> c49dd340180260c6239e453263a9a244da9a7c85 / >>> 2c91bd4a4e2e530582d6fd643ea7b86b27907151 . The commit improves mremap >>> performance by moving level3/level2 page table entries. When doing so we >>> are not holding level 4 ptl lock (pte_lock()). But rather we are holding >>> pmd_lock or pud_lock(). So if we move pages around without holding the >>> pte lock, won't the above issue happen even if we do a tlb flush with >>> holding pmd lock/pud lock? >> >> Hmm. Interesting. >> >> Your patch (to flush the TLB after clearing the old location, and >> before inserting it into the new one) looks like an "obvious" fix. >> >> But I'm putting that "obvious" in quotes, because I'm now wondering if >> it actually fixes anything. >> >> Lookie here: >> >> - CPU1 does a mremap of a pmd or pud. >> >> It clears the old pmd/pud, flushes the old TLB range, and then >> inserts the pmd/pud at the new location. >> >> - CPU2 does a page shrinker, which calls try_to_unmap, which calls >> try_to_unmap_one. >> >> These are entirely asynchronous, because they have no shared lock. The >> mremap uses the pmd lock, the try_to_unmap_one() does the rmap walk, >> which does the pte lock. >> >> Now, imagine that the following ordering happens with the two >> operations above, and a CPU3 that does accesses: >> >> - CPU2 follows (and sees) the old page tables in the old location and >> the took the pte lock >> >> - the mremap on CPU1 starts - cleared the old pmd, flushed the tlb, >> *and* inserts in the new place. >> >> - a user thread on CPU3 accesses the new location and fills the TLB >> of the *new* address >> >> - only now does CPU2 get to the "pte_get_and_clear()" to remove one page >> >> - CPU2 does a TLB flush and frees the page >> >> End result: >> >> - both CPU1 _and_ CPU2 have flushed the TLB. >> >> - but both flushed the *OLD* address >> >> - the page is freed >> >> - CPU3 still has the stale TLB entry pointing to the page that is now >> free and might be reused for something else >> >> Am I missing something? >> > > That is a problem. With that it looks like CONFIG_HAVE_MOVE_PMD/PUD is > broken? I don't see an easy way to fix this? We could do MOVE_PMD with something like below? A equivalent MOVE_PUD will be costlier which makes me wonder whether we should even support that? diff --git a/mm/mremap.c b/mm/mremap.c index 0270d6fed1dd..9e1e4392a1d9 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -233,7 +233,7 @@ static inline bool arch_supports_page_table_move(void) static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd) { - spinlock_t *old_ptl, *new_ptl; + spinlock_t *pte_ptl, *old_ptl, *new_ptl; struct mm_struct *mm = vma->vm_mm; pmd_t pmd; @@ -281,8 +281,17 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr, * flush the TLB before we move the page table entries. */ flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PMD_SIZE); + + /* + * Take the ptl here so that we wait for parallel page table walk + * and operations (eg: pageout) using old addr to finish. 
+ */ + pte_ptl = pte_lockptr(mm, old_pmd); + spin_lock(pte_ptl); + VM_BUG_ON(!pmd_none(*new_pmd)); pmd_populate(mm, new_pmd, pmd_pgtable(pmd)); + spin_unlock(pte_ptl); if (new_ptl != old_ptl) spin_unlock(new_ptl);
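
To make the interleaving above easier to follow, here is a minimal
single-threaded userspace C model that replays the five steps in
program order. Every name in it (the one-entry "TLB" variables,
tlb_flush_old_addr(), and so on) is invented for the sketch and is not
kernel API; it only models the logic of the race:

/*
 * Single-threaded model of the race above: the five steps are replayed
 * in program order.  All names are invented for this sketch; this is
 * not kernel code.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static int page = 42;		/* the physical page behind the mapping */
static bool page_freed;		/* set when "CPU2" frees it */
static int *pte = &page;	/* the pte that CPU2's rmap walk found */

/* CPU3's TLB: a one-entry cache tagged by the virtual address that filled it. */
static bool tlb_valid;
static bool tlb_tagged_old_addr;
static int *tlb_xlat;

static void tlb_flush_old_addr(void)
{
	/* Both CPU1 and CPU2 only ever flush the OLD address. */
	if (tlb_valid && tlb_tagged_old_addr)
		tlb_valid = false;
}

int main(void)
{
	/* 1: CPU2's rmap walk sees the pte at the old location, takes the pte lock. */

	/* 2: CPU1's mremap clears the old pmd, flushes, installs the new pmd. */
	tlb_flush_old_addr();		/* nothing cached yet; flushes nothing */

	/* 3: CPU3 touches the new location; hardware fills the TLB. */
	tlb_valid = true;
	tlb_tagged_old_addr = false;	/* tagged with the NEW address */
	tlb_xlat = pte;

	/* 4: only now does CPU2 clear the pte... */
	pte = NULL;

	/* 5: ...flush the OLD address, and free the page. */
	tlb_flush_old_addr();		/* misses CPU3's new-address entry */
	page_freed = true;

	/* CPU3 is left translating through a freed page. */
	assert(tlb_valid && page_freed && tlb_xlat == &page);
	printf("stale TLB entry -> freed page: %d\n", *tlb_xlat);
	return 0;
}

The final assert holds because the TLB entry was filled through the new
address, so both old-address flushes miss it, and it outlives the page.
With the pte lock taken before pmd_populate() as in the diff above,
CPU1 cannot publish the new pmd until CPU2's clear-and-flush has
finished, so the fill in step 3 can no longer happen while the page is
being freed.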