Patch "mm/khugepaged: take the right locks for page table retraction" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    mm/khugepaged: take the right locks for page table retraction

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-khugepaged-take-the-right-locks-for-page-table-re.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit fb456f2c893540f9a10c07cf05d86bc67bea8359
Author: Jann Horn <jannh@xxxxxxxxxx>
Date:   Tue Dec 6 18:16:06 2022 +0100

    mm/khugepaged: take the right locks for page table retraction
    
    commit 8d3c106e19e8d251da31ff4cc7462e4565d65084 upstream.
    
    Page table walks on address ranges mapped by VMAs can be done under the
    mmap lock, the lock of an anon_vma attached to the VMA, or the lock of the
    VMA's address_space.  Only one of these needs to be held, and it does not
    need to be held in exclusive mode.
    
    Under those circumstances, the rules for concurrent access to page table
    entries are:
    
     - Terminal page table entries (entries that don't point to another page
       table) can be arbitrarily changed under the page table lock, with the
       exception that they always need to be consistent for
       hardware page table walks and lockless_pages_from_mm().
       In particular, they can be changed into non-terminal entries.
     - Non-terminal page table entries (which point to another page
       table) cannot be modified; readers are allowed to READ_ONCE() an
       entry, verify that it is non-terminal, and then assume that its
       value will stay as-is (a reader following this rule is sketched
       below).
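
    [Editorial illustration, not part of the patch: a lockless reader
    following the second rule, in the style of the lockless walkers in
    mm/gup.c, looks roughly like this sketch.]

        pmd_t pmdval = READ_ONCE(*pmdp);

        /* Terminal (huge) or empty entries may change at any time. */
        if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
                return 0;

        /*
         * Non-terminal: per the second rule, this entry cannot be
         * modified while any of the high-level locks is held (or while
         * we run in a GUP-fast context), so the PTE page that the
         * snapshot value points to is safe to walk.
         */
        ptep = pte_offset_map(&pmdval, addr);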
    
    Retracting a page table involves modifying a non-terminal entry, so
    page-table-level locks are insufficient to protect against concurrent
    page table traversal; it requires taking, in exclusive mode, all the
    higher-level locks under which a page walk of the relevant range can
    be started.
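
    [Editorial sketch mirroring the diff below, with error handling
    omitted: for a file-backed VMA on 5.10, this means holding both the
    mmap lock and the address_space lock for writing around the actual
    retraction.]

        mmap_write_lock(mm);
        i_mmap_lock_write(vma->vm_file->f_mapping);
        /*
         * An anon_vma, if one existed, would need anon_vma_lock_write()
         * as well; both paths below sidestep that by bailing out when
         * vma->anon_vma is set.
         */

        /* Clear the non-terminal entry and flush the TLB. */
        _pmd = pmdp_collapse_flush(vma, haddr, pmd);

        i_mmap_unlock_write(vma->vm_file->f_mapping);
        mmap_write_unlock(mm);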
    
    The collapse_huge_page() path for anonymous THP already follows this rule,
    but the shmem/file THP path was getting it wrong, making it possible for
    concurrent rmap-based operations to cause corruption.
    
    Link: https://lkml.kernel.org/r/20221129154730.2274278-1-jannh@xxxxxxxxxx
    Link: https://lkml.kernel.org/r/20221128180252.1684965-1-jannh@xxxxxxxxxx
    Link: https://lkml.kernel.org/r/20221125213714.4115729-1-jannh@xxxxxxxxxx
    Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
    Signed-off-by: Jann Horn <jannh@xxxxxxxxxx>
    Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>
    Acked-by: David Hildenbrand <david@xxxxxxxxxx>
    Cc: John Hubbard <jhubbard@xxxxxxxxxx>
    Cc: Peter Xu <peterx@xxxxxxxxxx>
    Cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    [manual backport: this code was refactored from two copies into a common
    helper between 5.15 and 6.0]
    Signed-off-by: Jann Horn <jannh@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cf4dceb9682b..014e8b259313 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1457,6 +1457,14 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
 		return;
 
+	/*
+	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+	 * that got written to. Without this, we'd have to also lock the
+	 * anon_vma if one exists.
+	 */
+	if (vma->anon_vma)
+		return;
+
 	hpage = find_lock_page(vma->vm_file->f_mapping,
 			       linear_page_index(vma, haddr));
 	if (!hpage)
@@ -1469,6 +1477,19 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	if (!pmd)
 		goto drop_hpage;
 
+	/*
+	 * We need to lock the mapping so that from here on, only GUP-fast and
+	 * hardware page walks can access the parts of the page tables that
+	 * we're operating on.
+	 */
+	i_mmap_lock_write(vma->vm_file->f_mapping);
+
+	/*
+	 * This spinlock should be unnecessary: Nobody else should be accessing
+	 * the page tables under spinlock protection here, only
+	 * lockless_pages_from_mm() and the hardware page walker can access page
+	 * tables while all the high-level locks are held in write mode.
+	 */
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
 
 	/* step 1: check all mapped PTEs are to the right huge page */
@@ -1515,12 +1536,12 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	}
 
 	/* step 4: collapse pmd */
-	ptl = pmd_lock(vma->vm_mm, pmd);
 	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
-	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	pte_free(mm, pmd_pgtable(_pmd));
 
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
+
 drop_hpage:
 	unlock_page(hpage);
 	put_page(hpage);
@@ -1528,6 +1549,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 
 abort:
 	pte_unmap_unlock(start_pte, ptl);
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
 	goto drop_hpage;
 }
 
@@ -1577,7 +1599,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 * An alternative would be drop the check, but check that page
 		 * table is clear before calling pmdp_collapse_flush() under
 		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too.
+		 * has higher cost too. It would also probably require locking
+		 * the anon_vma.
 		 */
 		if (vma->anon_vma)
 			continue;
@@ -1599,10 +1622,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 */
 		if (mmap_write_trylock(mm)) {
 			if (!khugepaged_test_exit(mm)) {
-				spinlock_t *ptl = pmd_lock(mm, pmd);
 				/* assume page table is clear */
 				_pmd = pmdp_collapse_flush(vma, addr, pmd);
-				spin_unlock(ptl);
 				mm_dec_nr_ptes(mm);
 				pte_free(mm, pmd_pgtable(_pmd));
 			}


