6.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ard Biesheuvel <ardb@xxxxxxxxxx>

commit 34e526cb7d46726b2ae5f83f2892d00ebb088509 upstream.

Even though the boot protocol stipulates otherwise, an exception has
been made for the EFI stub, and entering the core kernel with the MMU
enabled is permitted.

This allows a substantial amount of cache maintenance to be elided,
which is significant when fast boot times are critical (e.g., for
booting micro-VMs).

Once the initial ID map has been populated, the MMU is disabled as part
of the logic sequence that puts all system registers into a known
state. Any code that needs to execute within the window where the MMU
is off is cleaned to the PoC explicitly, which includes all of HYP text
when entering at EL2.

However, the current sequence of initializing the EL2 system registers
is not safe: HCR_EL2 is set to its nVHE initial state before SCTLR_EL2
is reprogrammed, and this means that a VHE-to-nVHE switch may occur
while the MMU is enabled. This switch causes some system registers as
well as page table descriptors to be interpreted in a different way,
potentially resulting in spurious exceptions relating to MMU
translation.

So disable the MMU explicitly first when entering in EL2 with the MMU
and caches enabled.

Fixes: 617861703830 ("efi: arm64: enter with MMU and caches enabled")
Signed-off-by: Ard Biesheuvel <ardb@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # 6.3.x
Acked-by: Mark Rutland <mark.rutland@xxxxxxx>
Acked-by: Marc Zyngier <maz@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20240415075412.2347624-6-ardb+git@xxxxxxxxxx
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/arm64/kernel/head.S |    5 +++++
 1 file changed, 5 insertions(+)

--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -569,6 +569,11 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	adr_l	x1, __hyp_text_end
 	adr_l	x2, dcache_clean_poc
 	blr	x2
+
+	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
+	pre_disable_mmu_workaround
+	msr	sctlr_el2, x0
+	isb
 0:
 	mov_q	x0, HCR_HOST_NVHE_FLAGS
 	msr	hcr_el2, x0
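
For reviewers: below is a condensed, comment-annotated view of the patched
init_el2 ordering, assembled from the hunk above (not the literal head.S
contents); the comments restate the reasoning from the commit message.

	/* HYP text has already been cleaned to the PoC by the preceding code. */
	adr_l	x1, __hyp_text_end
	adr_l	x2, dcache_clean_poc
	blr	x2

	/*
	 * New with this patch: turn the MMU off at EL2 before HCR_EL2 is
	 * written, so the VHE-to-nVHE switch cannot happen while translation
	 * is still enabled.
	 */
	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
	pre_disable_mmu_workaround
	msr	sctlr_el2, x0
	isb
0:
	/* Only now is HCR_EL2 put into its nVHE initial state. */
	mov_q	x0, HCR_HOST_NVHE_FLAGS
	msr	hcr_el2, x0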