[merged mm-stable] arm64-tlbflush-add-some-comments-for-tlb-batched-flushing.patch removed from -mm tree

The quilt patch titled
     Subject: arm64: tlbflush: add some comments for TLB batched flushing
has been removed from the -mm tree.  Its filename was
     arm64-tlbflush-add-some-comments-for-tlb-batched-flushing.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Yicong Yang <yangyicong@xxxxxxxxxxxxx>
Subject: arm64: tlbflush: add some comments for TLB batched flushing
Date: Tue, 1 Aug 2023 20:42:03 +0800

Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is needed.

Link: https://lkml.kernel.org/r/20230801124203.62164-1-yangyicong@xxxxxxxxxx
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Yicong Yang <yangyicong@xxxxxxxxxxxxx>
Reviewed-by: Alistair Popple <apopple@xxxxxxxxxx>
Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Barry Song <21cnbao@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/include/asm/tlbflush.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

--- a/arch/arm64/include/asm/tlbflush.h~arm64-tlbflush-add-some-comments-for-tlb-batched-flushing
+++ a/arch/arm64/include/asm/tlbflush.h
@@ -304,11 +304,26 @@ static inline void arch_tlbbatch_add_pen
 	__flush_tlb_page_nosync(mm, uaddr);
 }
 
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBIs issued with a DSB to avoid the race mentioned in
+ * flush_tlb_batched_pending().
+ */
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	dsb(ish);
 }
 
+/*
+ * To support TLB batched flushing for multiple-page unmapping, we only issue
+ * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
+ * completion at the end in arch_tlbbatch_flush(). Since we've already issued
+ * a TLBI for each page, only a DSB is needed to synchronise its effect on
+ * the other CPUs.
+ *
+ * This saves the time spent waiting on the DSB, compared to issuing a
+ * TLBI;DSB sequence for each page.
+ */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	dsb(ish);
_

Patches currently in -mm which might be from yangyicong@xxxxxxxxxxxxx are




