Let's invalidate the TLB before enabling the MMU, not after, so we
don't accidentally use a stale TLB mapping. For arm, we add a TLBIALL
operation, which applies only to the PE that executed the
instruction [1]. For arm64, we already do that in asm_mmu_enable.

We now find ourselves in a situation where we issue an extra
invalidation after asm_mmu_enable returns. Remove this redundant call
to flush_tlb_all.

[1] ARM DDI 0406C.d, section B3.10.6

Reviewed-by: Andrew Jones <drjones@xxxxxxxxxx>
Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
---
 lib/arm/mmu.c | 1 -
 arm/cstart.S  | 4 ++++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 773c764c4836..530d6b825398 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -59,7 +59,6 @@ void mmu_enable(pgd_t *pgtable)
 	struct thread_info *info = current_thread_info();
 
 	asm_mmu_enable(__pa(pgtable));
-	flush_tlb_all();
 
 	info->pgtable = pgtable;
 	mmu_mark_enabled(info->cpu);
diff --git a/arm/cstart.S b/arm/cstart.S
index 3c2a3bcde61a..32b2b4f03098 100644
--- a/arm/cstart.S
+++ b/arm/cstart.S
@@ -161,6 +161,10 @@ halt:
 .equ	NMRR,	0xff000004	@ MAIR1 (from Linux kernel)
 .globl asm_mmu_enable
 asm_mmu_enable:
+	/* TLBIALL */
+	mcr	p15, 0, r2, c8, c7, 0
+	dsb	nsh
+
 	/* TTBCR */
 	ldr	r2, =(TTBCR_EAE |		\
 		      TTBCR_SH0_SHARED |	\
-- 
2.20.1
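
Note: for reference, the arm64 asm_mmu_enable mentioned above already
performs the same kind of local (non-broadcast) invalidation before
turning on the MMU. A minimal sketch of that sequence, paraphrased
rather than quoted verbatim from arm/cstart64.S:

	tlbi	vmalle1		// stage 1, EL1, all entries, this PE only
	dsb	nsh		// complete the invalidation before enabling the MMU

TLBIALL with dsb nsh in the arm code added here is the AArch32
counterpart of that sequence.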