On Wed, 11 Jan 2023 11:22:30 +0100, Ard Biesheuvel wrote:
> The purpose of this series is to remove any explicit cache maintenance
> for coherency during early boot. Software managed coherency is error
> prone and tedious, and running with the MMU off is generally bad for
> performance, and it becomes unnecessary if we simply retain the
> cacheable 1:1 mapping of all of system RAM provided by EFI, and use it
> to populate the initial ID map page tables. After setting up this
> preliminary ID map, we disable the MMU, drop to EL1, reprogram the MAIR,
> TCR and SCTLR registers as before, and proceed as usual, avoiding the
> need for any manipulations of memory while the MMU and caches are off.
>
> [...]

Applied to arm64 (for-next/efi-boot-mmu-on), thanks!

[1/6] arm64: head: Move all finalise_el2 calls to after __enable_mmu
      https://git.kernel.org/arm64/c/82e4958800c0
[2/6] arm64: kernel: move identity map out of .text mapping
      https://git.kernel.org/arm64/c/af7249b317e4
[3/6] arm64: head: record the MMU state at primary entry
      https://git.kernel.org/arm64/c/9d7c13e5dde3
[4/6] arm64: head: avoid cache invalidation when entering with the MMU on
      https://git.kernel.org/arm64/c/32b135a7fafe
[5/6] arm64: head: Clean the ID map and the HYP text to the PoC if needed
      https://git.kernel.org/arm64/c/3dcf60bbfd28
[6/6] efi: arm64: enter with MMU and caches enabled
      https://git.kernel.org/arm64/c/617861703830

-- 
Catalin
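
P.S. For readers unfamiliar with the mechanism behind patch 3/6: the MMU
state at entry can in principle be determined by testing the M bit (bit 0)
of SCTLR_ELx for the exception level the kernel was entered at. Below is a
minimal C sketch of that check, written purely for illustration and
assuming entry at EL1; the actual check in head.S is hand-written assembly
and also handles entry at EL2, so treat this only as a rough sketch of the
idea, not the code added by the series.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Illustrative sketch only (assumes we are running at EL1):
     * SCTLR_EL1.M (bit 0) is the stage 1 MMU enable for the EL1&0
     * translation regime, so reading it tells us whether we were
     * entered with the MMU (and hence cacheable accesses) enabled.
     */
    static inline bool entered_with_mmu_on(void)
    {
            uint64_t sctlr;

            /* Read SCTLR_EL1 and test the MMU enable bit. */
            asm volatile("mrs %0, sctlr_el1" : "=r" (sctlr));
            return sctlr & 1;
    }

Recording this state early is what lets the later patches skip the cache
invalidation and PoC maintenance when the loader already entered the
kernel with the MMU and caches on.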