Hi Lior,

On Thu, Dec 14, 2023 at 04:04:31PM +0000, Lior Weintraub wrote:
> Hi,
>
> The patch below fixes the mmu_early_enable function to correctly map 40 bits
> of virtual address into physical address with a 1:1 mapping.
> It uses the init_range function to set 2 table entries on TTB level 0 and
> then fills level 1 with the correct 1:1 mapping.
>
> This patch was merged from an older Barebox version into the most recent
> master (Commit: 975acf1bafba2366eb40c5e8d8cb732b53f27aa1).
> Since it wasn't tested on the most recent master branch (lack of resources),
> I would appreciate it if someone could test it on a 64-bit ARMv8 platform.
>
> IMHO, the old implementation is wrong because:
> 1. It tries to map the full range of VA (48 bits) with a 1:1 mapping, but
>    there is a maximum of only 40 PA bits.
>    As a result, there is a wraparound that causes wrong mapping.
> 2. TTB level 0 cannot have a block descriptor, only a table descriptor.
>    According to "Learn the architecture - AArch64 memory management",
>    Figure 6-1: Translation table format:
>    "Each entry is 64 bits and the bottom two bits determine the type of entry.
>    Notice that some of the table entries are only valid at specific levels.
>    The maximum number of levels of tables is four, which is why there is no
>    table descriptor for level 3 (or the fourth level) tables. Similarly,
>    there are no Block descriptors or Page descriptors for level 0. Because a
>    level 0 entry covers a large region of virtual address space, it does not
>    make sense to allow blocks."
>
> Cheers,
> Lior.
>
> From a98fa2bad05721fd4c3ceae4f63eedd90c29c244 Mon Sep 17 00:00:00 2001
> From: Lior Weintraub <liorw@xxxxxxxxxx>
> Date: Thu, 14 Dec 2023 17:05:04 +0200
> Subject: [PATCH] [ARM64][MMU] Fix mmu_early_enable VA->PA mapping
>
> ---
>  arch/arm/cpu/mmu_64.c            | 17 ++++++++++++++++-
>  arch/arm/cpu/mmu_64.h            | 19 +++++++++++++++++--
>  arch/arm/include/asm/pgtable64.h |  1 +
>  3 files changed, 34 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
> index c6ea63e655..f35c1b5937 100644
> --- a/arch/arm/cpu/mmu_64.c
> +++ b/arch/arm/cpu/mmu_64.c
> @@ -294,6 +294,19 @@ void dma_flush_range(void *ptr, size_t size)
>  	v8_flush_dcache_range(start, end);
>  }
>
> +void init_range(void *virt_addr, size_t size)
> +{
> +	uint64_t *ttb = get_ttb();
> +	uint64_t addr = (uint64_t)virt_addr;
> +	while(size) {
> +		remap_range((void *)addr, L0_XLAT_SIZE, MAP_UNCACHED);

This should be early_remap_range(). remap_range() is not safe to be
called at this early stage and breaks running with qemu:

https://github.com/barebox/barebox/actions/runs/7246849041/job/19739568067

When sending a correct version, could you please add your Signed-off-by
to the patch?

Also, please send the patch as a new thread, not in response to another
mail.

Sascha

--
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
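
For reference, a minimal sketch of the loop with early_remap_range()
substituted as requested. This is an illustration only, not the submitted
patch: it assumes early_remap_range() takes the same (address, size, map
type) arguments as the remap_range() call it replaces, and that size is a
multiple of L0_XLAT_SIZE (the two level-0 entries described above). The
quoted hunk is cut off at the inline reply, so the rest of the original
loop body is not reproduced here; check arch/arm/cpu/mmu_64.c for the
exact prototype before reusing any of this.

void init_range(void *virt_addr, size_t size)
{
	uint64_t addr = (uint64_t)virt_addr;

	while (size) {
		/* Uncached 1:1 mapping, one level-0 sized chunk at a time,
		 * using the helper meant for this early stage. */
		early_remap_range(addr, L0_XLAT_SIZE, MAP_UNCACHED);
		addr += L0_XLAT_SIZE;
		size -= L0_XLAT_SIZE;
	}
}

The substitution follows Sascha's note above: remap_range() is not safe at
this early stage (and breaks the linked qemu CI run), while the early
variant is presumably intended to be callable from mmu_early_enable()
before the normal MMU setup exists.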