On 12/12/2023 11:35, Will Deacon wrote:
> On Mon, Dec 04, 2023 at 10:54:37AM +0000, Ryan Roberts wrote:
>> Split __flush_tlb_range() into __flush_tlb_range_nosync() +
>> __flush_tlb_range(), in the same way as the existing flush_tlb_page()
>> arrangement. This allows calling __flush_tlb_range_nosync() to elide the
>> trailing DSB. Forthcoming "contpte" code will take advantage of this
>> when clearing the young bit from a contiguous range of ptes.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 13 +++++++++++--
>>  1 file changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index bb2c2833a987..925ef3bdf9ed 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -399,7 +399,7 @@ do { \
>>  #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
>>  	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false)
>>
>> -static inline void __flush_tlb_range(struct vm_area_struct *vma,
>> +static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
>>  				     unsigned long start, unsigned long end,
>>  				     unsigned long stride, bool last_level,
>>  				     int tlb_level)
>> @@ -431,10 +431,19 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>  	else
>>  		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
>>
>> -	dsb(ish);
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
>>  }
>>
>> +static inline void __flush_tlb_range(struct vm_area_struct *vma,
>> +				     unsigned long start, unsigned long end,
>> +				     unsigned long stride, bool last_level,
>> +				     int tlb_level)
>> +{
>> +	__flush_tlb_range_nosync(vma, start, end, stride,
>> +				 last_level, tlb_level);
>> +	dsb(ish);
>> +}
>
> Hmm, are you sure it's safe to defer the DSB until after the secondary TLB
> invalidation? It will have a subtle effect on e.g. an SMMU participating
> in broadcast TLB maintenance, because now the ATC will be invalidated
> before completion of the TLB invalidation and it's not obviously safe to me.

I'll be honest; I don't know that it's safe. The notifier calls turned up
during a rebase and I stared at them for a while, before eventually
concluding that I should just follow the existing pattern in
__flush_tlb_page_nosync(): that one calls the mmu notifier without the DSB,
then flush_tlb_page() does the DSB after. So I assumed it was safe.

If you think it's not safe, I guess there is a bug to fix in
__flush_tlb_page_nosync()?

>
> Will