We need to issue a DSB before doing the TLB invalidation to make sure
that the table walker sees the new VA mapping after the TLBI finishes.
For flush_tlb_page, we do a DSB ISHST (synchronization barrier for
writes in the Inner Shareable domain), because translation table walks
are now coherent on arm. For local_flush_tlb_all, we only need to
affect the Non-shareable domain, so we do a DSB NSHST. We need a
synchronization barrier here, and not a memory ordering barrier,
because a table walk is not a memory operation and therefore is not
affected by a DMB.

For the same reasons, we downgrade the full system DSB after the TLBI
to a DSB ISH (synchronization barrier for reads and writes in the
Inner Shareable domain) and, respectively, to a DSB NSH (in the
Non-shareable domain).

With these two changes, our TLB maintenance functions now match what
Linux does in __flush_tlb_kernel_page and, respectively, in
local_flush_tlb_all. A similar change was implemented in Linux commit
62cbbc42e001 ("ARM: tlb: reduce scope of barrier domains for TLB
invalidation").

Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
---
 lib/arm/asm/mmu.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/arm/asm/mmu.h b/lib/arm/asm/mmu.h
index 361f3cdcc3d5..2bf8965ed35e 100644
--- a/lib/arm/asm/mmu.h
+++ b/lib/arm/asm/mmu.h
@@ -17,9 +17,10 @@
 static inline void local_flush_tlb_all(void)
 {
+	dsb(nshst);
 	/* TLBIALL */
 	asm volatile("mcr p15, 0, %0, c8, c7, 0" :: "r" (0));
-	dsb();
+	dsb(nsh);
 	isb();
 }

@@ -31,9 +32,10 @@ static inline void flush_tlb_all(void)
 static inline void flush_tlb_page(unsigned long vaddr)
 {
+	dsb(ishst);
 	/* TLBIMVAAIS */
 	asm volatile("mcr p15, 0, %0, c8, c3, 3" :: "r" (vaddr));
-	dsb();
+	dsb(ish);
 	isb();
 }
--
2.7.4
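
Not part of the patch, just an illustration of the end result: this is
roughly how the two helpers read with the patch applied, with comments
spelling out what each barrier is for. The dsb()/isb() definitions below
are an assumed sketch modeled on the Linux arch/arm barrier macros, not
copied from lib/arm/asm/barrier.h.

/* Assumed barrier helpers (sketch, modeled on Linux arch/arm): */
#define dsb(option)	asm volatile("dsb " #option : : : "memory")
#define isb()		asm volatile("isb" : : : "memory")

static inline void local_flush_tlb_all(void)
{
	/* Complete table updates before invalidating (Non-shareable, writes only) */
	dsb(nshst);
	/* TLBIALL */
	asm volatile("mcr p15, 0, %0, c8, c7, 0" :: "r" (0));
	/* Wait for the invalidation to complete (Non-shareable domain) */
	dsb(nsh);
	/* Make sure subsequent instructions use the new mapping */
	isb();
}

static inline void flush_tlb_page(unsigned long vaddr)
{
	/* Make table updates visible to walkers in the Inner Shareable domain */
	dsb(ishst);
	/* TLBIMVAAIS */
	asm volatile("mcr p15, 0, %0, c8, c3, 3" :: "r" (vaddr));
	/* Wait for the invalidation to complete (Inner Shareable domain) */
	dsb(ish);
	/* Make sure subsequent instructions use the new mapping */
	isb();
}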