The quilt patch titled
     Subject: mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
has been removed from the -mm tree.  Its filename was
     mmunmap-avoid-flushing-tlb-in-batch-if-pte-is-inaccessible.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Huang Ying <ying.huang@xxxxxxxxx>
Subject: mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
Date: Mon, 24 Apr 2023 14:54:08 +0800

0Day/LKP reported a performance regression for commit 7e12beb8ca2a
("migrate_pages: batch flushing TLB").  In that commit, the TLB flushing
during page migration is batched, so ptep_clear_flush() in
try_to_migrate_one() is replaced with set_tlb_ubc_flush_pending().

Further investigation found that ptep_clear_flush() already avoids the
TLB flush when the PTE is inaccessible.  The batched TLB flushing can be
optimized in the same way to improve performance.  So in this patch, we
check pte_accessible() before calling set_tlb_ubc_flush_pending() in
try_to_unmap_one() and try_to_migrate_one().

Tests show that with this patch the benchmark score of the
anon-cow-rand-mt test case of the vm-scalability test suite improves by
up to 2.1% on an Intel server, and the number of TLB flush IPIs is
reduced by up to 44.3%.

Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@xxxxxxxxx
Link: https://lore.kernel.org/oe-lkp/ab92aaddf1b52ede15e2c608696c36765a2602c1.camel@xxxxxxxxx/
Link: https://lkml.kernel.org/r/20230424065408.188498-1-ying.huang@xxxxxxxxx
Fixes: 7e12beb8ca2a ("migrate_pages: batch flushing TLB")
Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Reported-by: kernel test robot <yujie.liu@xxxxxxxxx>
Reviewed-by: Nadav Amit <namit@xxxxxxxxxx>
Reviewed-by: Xin Hao <xhao@xxxxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/mm/rmap.c~mmunmap-avoid-flushing-tlb-in-batch-if-pte-is-inaccessible
+++ a/mm/rmap.c
@@ -642,10 +642,14 @@ void try_to_unmap_flush_dirty(void)
 #define TLB_FLUSH_BATCH_PENDING_LARGE			\
 	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
 
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
 	int batch;
+	bool writable = pte_dirty(pteval);
+
+	if (!pte_accessible(mm, pteval))
+		return;
 
 	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
 	tlb_ubc->flush_required = true;
@@ -729,7 +733,7 @@ void flush_tlb_batched_pending(struct mm
 	}
 }
 #else
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
 {
 }
 
@@ -1580,7 +1584,7 @@ static bool try_to_unmap_one(struct foli
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			set_tlb_ubc_flush_pending(mm, pteval);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}
@@ -1961,7 +1965,7 @@ static bool try_to_migrate_one(struct fo
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			set_tlb_ubc_flush_pending(mm, pteval);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}
_
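For reference, pte_accessible() is architecture-defined; on architectures
without their own definition, the generic fallback in include/linux/pgtable.h
unconditionally returns 1, so the new early return is a no-op there.  Below
is a minimal sketch of the x86 variant, adapted from
arch/x86/include/asm/pgtable.h (the exact form varies across kernel
versions).  It shows why skipping the batched flush is safe: a PTE that is
neither present nor part of a pending prot_none change cannot be cached in
any TLB, so there is nothing to flush.

/*
 * Sketch of the x86 pte_accessible() helper (adapted from
 * arch/x86/include/asm/pgtable.h; details vary by kernel version).
 */
static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
	/* A present PTE may be cached in some TLB. */
	if (pte_flags(a) & _PAGE_PRESENT)
		return true;

	/*
	 * A PROT_NONE PTE may still be cached while a concurrent
	 * protection change has its TLB flush pending.
	 */
	if ((pte_flags(a) & _PAGE_PROTNONE) && mm_tlb_flush_pending(mm))
		return true;

	return false;
}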
Patches currently in -mm which might be from ying.huang@xxxxxxxxx are