The patch titled
     Subject: mm/huge_memory: remove stale locking logic from __split_huge_pmd()
has been removed from the -mm tree.  Its filename was
     mm-huge_memory-remove-stale-locking-logic-from-__split_huge_pmd.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/huge_memory: remove stale locking logic from __split_huge_pmd()

Let's remove the stale logic that was required for reuse_swap_page().

[akpm@xxxxxxxxxxxxxxxxxxxx: simplification, per Yang Shi]
Link: https://lkml.kernel.org/r/20220131162940.210846-10-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Don Dutile <ddutile@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Liang Zhang <zhangliang5@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |   40 ++++------------------------------------
 1 file changed, 4 insertions(+), 36 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-remove-stale-locking-logic-from-__split_huge_pmd
+++ a/mm/huge_memory.c
@@ -2133,8 +2133,6 @@ void __split_huge_pmd(struct vm_area_str
 {
 	spinlock_t *ptl;
 	struct mmu_notifier_range range;
-	bool do_unlock_folio = false;
-	pmd_t _pmd;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address & HPAGE_PMD_MASK,
@@ -2153,42 +2151,12 @@ void __split_huge_pmd(struct vm_area_str
 		goto out;
 	}
 
-repeat:
-	if (pmd_trans_huge(*pmd)) {
-		if (!folio) {
-			folio = page_folio(pmd_page(*pmd));
-			/*
-			 * An anonymous page must be locked, to ensure that a
-			 * concurrent reuse_swap_page() sees stable mapcount;
-			 * but reuse_swap_page() is not used on shmem or file,
-			 * and page lock must not be taken when zap_pmd_range()
-			 * calls __split_huge_pmd() while i_mmap_lock is held.
-			 */
-			if (folio_test_anon(folio)) {
-				if (unlikely(!folio_trylock(folio))) {
-					folio_get(folio);
-					_pmd = *pmd;
-					spin_unlock(ptl);
-					folio_lock(folio);
-					spin_lock(ptl);
-					if (unlikely(!pmd_same(*pmd, _pmd))) {
-						folio_unlock(folio);
-						folio_put(folio);
-						folio = NULL;
-						goto repeat;
-					}
-					folio_put(folio);
-				}
-				do_unlock_folio = true;
-			}
-		}
-	} else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
-		goto out;
-	__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd))
+		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+
 out:
 	spin_unlock(ptl);
-	if (do_unlock_folio)
-		folio_unlock(folio);
 	/*
 	 * No need to double call mmu_notifier->invalidate_range() callback.
 	 * They are 3 cases to consider inside __split_huge_pmd_locked():
_

Patches currently in -mm which might be from david@xxxxxxxxxx are
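
The locking dance deleted above is a standard kernel pattern: a sleeping lock (the folio lock) must not be acquired while a spinlock (the page-table lock, ptl) is held, so the code try-locks first and, on failure, snapshots the spinlock-protected state (_pmd = *pmd), drops the spinlock, sleeps on the folio lock, retakes the spinlock, and revalidates with pmd_same(), retrying if the PMD changed in the meantime.  Per the removed comment, the only consumer that needed the folio lock held across the split was reuse_swap_page(); with that gone, the whole dance can go.  Below is a minimal userspace sketch of the pattern itself, using pthreads instead of kernel primitives -- all names are illustrative, not from the patch:

/*
 * Trylock + drop/retry with revalidation: acquire lock_b while already
 * holding lock_a, when blocking on lock_b under lock_a is not allowed.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;	/* like the ptl */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;	/* like the folio lock */
static int state;	/* protected by lock_a; plays the role of *pmd */

/* Returns with both locks held and 'state' known to be unchanged. */
static void lock_both_revalidated(void)
{
	pthread_mutex_lock(&lock_a);
	for (;;) {
		if (pthread_mutex_trylock(&lock_b) == 0)
			return;			/* fast path, like folio_trylock() */

		int snapshot = state;		/* like _pmd = *pmd */
		pthread_mutex_unlock(&lock_a);	/* must not block on B under A */
		pthread_mutex_lock(&lock_b);	/* like folio_lock() */
		pthread_mutex_lock(&lock_a);

		if (state == snapshot)		/* like pmd_same() */
			return;

		/* state changed while A was dropped: undo and retry */
		pthread_mutex_unlock(&lock_b);
	}
}

int main(void)
{
	lock_both_revalidated();
	printf("state=%d with both locks held\n", state);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return 0;
}

The kernel version additionally took a folio reference across the window where the spinlock was dropped, so the page could not disappear underneath it; that is what the folio_get()/folio_put() pairs in the removed block were for.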