Hi Will,

The code looks much nicer with the EL2 page-table allocator. One minor
nitpick below.

On 8/25/20 10:39 AM, Will Deacon wrote:
> Now that we have a shiny new page-table allocator, replace the hyp
> page-table code with calls into the new API. This also allows us to
> remove the extended idmap code, as we can now simply ensure that the
> VA size is large enough to map everything we need.
>
> Cc: Marc Zyngier <maz@xxxxxxxxxx>
> Cc: Quentin Perret <qperret@xxxxxxxxxx>
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_mmu.h       |  78 +----
>  arch/arm64/include/asm/kvm_pgtable.h   |   5 +
>  arch/arm64/include/asm/pgtable-hwdef.h |   6 -
>  arch/arm64/include/asm/pgtable-prot.h  |   6 -
>  arch/arm64/kvm/mmu.c                   | 414 +++----------------------
>  5 files changed, 45 insertions(+), 464 deletions(-)
>
> [..]
> @@ -2356,6 +2028,7 @@ static int kvm_map_idmap_text(pgd_t *pgd)
>  int kvm_mmu_init(void)
>  {
>  	int err;
> +	u32 hyp_va_bits;
>
>  	hyp_idmap_start = __pa_symbol(__hyp_idmap_text_start);
>  	hyp_idmap_start = ALIGN_DOWN(hyp_idmap_start, PAGE_SIZE);
> @@ -2369,6 +2042,8 @@ int kvm_mmu_init(void)
>  	 */
>  	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
>
> +	hyp_va_bits = 64 - ((idmap_t0sz & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET);

idmap_t0sz is defined in mm/mmu.c as TCR_T0SZ(VA_BITS), which expands to:

	(UL(64) - VA_BITS) << TCR_T0SZ_OFFSET

Looks to me like hyp_va_bits == VA_BITS.

Thanks,
Alex
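
P.S. For anyone following along, here is a tiny stand-alone sketch (not
kernel code, not a patch) of the substitution above. The constants mirror
my reading of pgtable-hwdef.h and mm/mmu.c (TCR_T0SZ_OFFSET being 0 and
TCR_T0SZ_MASK covering the 6-bit T0SZ field); treat them, and the VA_BITS
value of 48, as illustrative stand-ins rather than copies of the kernel
definitions:

	/* Demonstrates: 64 - (64 - VA_BITS) == VA_BITS */
	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	#define VA_BITS          48UL   /* example configuration only */
	#define TCR_T0SZ_OFFSET  0
	#define TCR_T0SZ_MASK    (UINT64_C(63) << TCR_T0SZ_OFFSET)
	#define TCR_T0SZ(x)      ((UINT64_C(64) - (x)) << TCR_T0SZ_OFFSET)

	int main(void)
	{
		/* As set in mm/mmu.c */
		uint64_t idmap_t0sz = TCR_T0SZ(VA_BITS);

		/* As computed in the new kvm_mmu_init() hunk */
		uint32_t hyp_va_bits =
			64 - ((idmap_t0sz & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET);

		assert(hyp_va_bits == VA_BITS);
		printf("hyp_va_bits = %u (VA_BITS = %lu)\n",
		       (unsigned)hyp_va_bits, (unsigned long)VA_BITS);
		return 0;
	}

Building it with a plain "cc" and running it prints hyp_va_bits = 48, i.e.
the same value as VA_BITS for this configuration.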