From: Yicong Yang <yangyicong@xxxxxxxxxxxxx>

Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is needed.

Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Yicong Yang <yangyicong@xxxxxxxxxxxxx>
Reviewed-by: Alistair Popple <apopple@xxxxxxxxxx>
Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
---
Changes since v1:
- Address one comment from Catalin: s/by a DSB/with a DSB/
- Add tags from Catalin and Alistair
Link: https://lore.kernel.org/all/20230729131448.15531-1-yangyicong@xxxxxxxxxx/

 arch/arm64/include/asm/tlbflush.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3456866c6a1d..bd1acaf7f715 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	__flush_tlb_page_nosync(mm, uaddr);
 }
 
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBIs issued with a DSB to avoid the race mentioned
+ * in flush_tlb_batched_pending().
+ */
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	dsb(ish);
 }
 
+/*
+ * To support batched TLB flushing when unmapping multiple pages, we only
+ * issue the TLBI for each page in arch_tlbbatch_add_pending() and wait for
+ * the completion at the end in arch_tlbbatch_flush(). Since we've already
+ * issued a TLBI for each page, only a DSB is needed to synchronise its
+ * effect on the other CPUs.
+ *
+ * This saves waiting on a DSB per page, compared to issuing a TLBI;DSB
+ * sequence for each page.
+ */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	dsb(ish);
-- 
2.24.0
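
For illustration only (not part of the patch): from the caller's side, the
pattern the new comments describe looks roughly like the sketch below. The
caller unmap_pages_batched() is made up for this example; the real users sit
in mm/rmap.c behind try_to_unmap_flush(). Only arch_tlbbatch_add_pending()
and arch_tlbbatch_flush() are the functions touched above.

/* Sketch only -- hypothetical caller, not part of this patch. */
static void unmap_pages_batched(struct mm_struct *mm,
				unsigned long *uaddrs, int nr)
{
	struct arch_tlbflush_unmap_batch batch = {};
	int i;

	/* Issue a per-page TLBI without waiting for completion. */
	for (i = 0; i < nr; i++)
		arch_tlbbatch_add_pending(&batch, mm, uaddrs[i]);

	/* One DSB synchronises all of the pending TLBIs above. */
	arch_tlbbatch_flush(&batch);
}

Compared with performing a TLBI;DSB sequence per page (as flush_tlb_page()
does), the caller waits on the DSB only once for the whole batch.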