Yicong Yang <yangyicong@xxxxxxxxxx> writes:

Thanks! I was reading this code the other day and it took me a while to
figure out what was going on. These comments would have been very helpful
and match my understanding, so:

Reviewed-by: Alistair Popple <apopple@xxxxxxxxxx>

> From: Yicong Yang <yangyicong@xxxxxxxxxxxxx>
>
> Add comments for arch_flush_tlb_batched_pending() and
> arch_tlbbatch_flush() to illustrate why only a DSB is
> needed.
>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Signed-off-by: Yicong Yang <yangyicong@xxxxxxxxxxxxx>
> ---
>  arch/arm64/include/asm/tlbflush.h | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 3456866c6a1d..2bad230b95b4 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>  	__flush_tlb_page_nosync(mm, uaddr);
>  }
>
> +/*
> + * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
> + * synchronise all the TLBIs issued with a DSB, to avoid the race mentioned
> + * in flush_tlb_batched_pending().
> + */
>  static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>  {
>  	dsb(ish);
>  }
>
> +/*
> + * To support TLB batched flushing when unmapping multiple pages, we only
> + * issue the TLBI for each page in arch_tlbbatch_add_pending() and wait for
> + * completion at the end in arch_tlbbatch_flush(). Since a TLBI has already
> + * been issued for each page, only a DSB is needed to synchronise its effect
> + * on the other CPUs.
> + *
> + * This saves the time spent waiting on the DSB compared to issuing a
> + * TLBI;DSB sequence for each page.
> + */
>  static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>  {
>  	dsb(ish);
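
For anyone else trying to follow this code, the generic side that drives
these hooks lives in mm/rmap.c. The sequence, as I read it, is roughly the
following (a simplified sketch, not the literal kernel code; "batch" here
stands for the per-task struct arch_tlbflush_unmap_batch):

  /* reclaim path: for each page being unmapped */
  arch_tlbbatch_add_pending(batch, mm, uaddr);  /* TLBI per page, no DSB */

  /* after the whole batch has been queued */
  arch_tlbbatch_flush(batch);                   /* a single DSB for all
                                                   the pending TLBIs */

  /* a racing mprotect()/munmap() that finds a batch pending */
  flush_tlb_batched_pending(mm);
      -> arch_flush_tlb_batched_pending(mm);    /* DSB: wait for the
                                                   already-issued TLBIs */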