Let's invalidate the TLB before enabling the MMU, not after, so we don't
accidentally use a stale TLB mapping. For arm, we add a TLBIALL operation,
which applies only to the PE that executed the instruction [1]. For arm64,
we already do that in asm_mmu_enable.

We now find ourselves in a situation where we issue an extra invalidation
after asm_mmu_enable returns. Remove this redundant call to flush_tlb_all.

[1] ARM DDI 0406C.d, section B3.10.6

Reviewed-by: Andrew Jones <drjones@xxxxxxxxxx>
Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
---
Changes:
* Reworded the commit message a bit, but I figured the changes are small
  and so I kept the Reviewed-by.

 lib/arm/mmu.c | 1 -
 arm/cstart.S  | 4 ++++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 161f7a8e607c..66a05d789386 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -57,7 +57,6 @@ void mmu_enable(pgd_t *pgtable)
 	struct thread_info *info = current_thread_info();
 
 	asm_mmu_enable(__pa(pgtable));
-	flush_tlb_all();
 
 	info->pgtable = pgtable;
 	mmu_mark_enabled(info->cpu);

diff --git a/arm/cstart.S b/arm/cstart.S
index c8bbb4f0a2a2..748548c3b516 100644
--- a/arm/cstart.S
+++ b/arm/cstart.S
@@ -160,6 +160,10 @@ halt:
 .equ	NMRR,	0xff000004		@ MAIR1 (from Linux kernel)
 .globl	asm_mmu_enable
 asm_mmu_enable:
+	/* TLBIALL */
+	mcr	p15, 0, r2, c8, c7, 0
+	dsb	ish
+
 	/* TTBCR */
 	mrc	p15, 0, r2, c2, c0, 2
 	orr	r2, #(1 << 31)		@ TTB_EAE
-- 
2.20.1
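
For readers less familiar with the CP15 encoding used in the cstart.S hunk,
here is a minimal illustrative sketch (not part of the patch) of how the same
per-PE invalidation could be expressed from C with inline assembly. The helper
name local_tlbiall is hypothetical; the encoding (c8, c7, 0) is the TLBIALL
operation from ARM DDI 0406C.d, section B3.10.6, and the barriers mirror the
dsb used above, plus an isb for a standalone helper.

	/*
	 * Illustrative only -- hypothetical helper, not part of this patch.
	 * Per ARM DDI 0406C.d B3.10.6, the value written to the source
	 * register of TLBIALL is ignored.
	 */
	static inline void local_tlbiall(void)
	{
		/* TLBIALL: invalidate all TLB entries for this PE */
		asm volatile("mcr p15, 0, %0, c8, c7, 0" : : "r" (0) : "memory");
		/* Complete the invalidation before any translation relies on it */
		asm volatile("dsb ish" : : : "memory");
		/* Synchronize the instruction stream with the new context */
		asm volatile("isb" : : : "memory");
	}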