On Sun, 25 Oct 2020 01:27:39 +0100, Gavin Shan <gshan@xxxxxxxxxx> wrote:
>
> A huge page could be mapped through multiple contiguous PMDs or PTEs.
> The corresponding huge page sizes aren't currently supported by the
> page table walker.
>
> Fall back from the unsupported huge page sizes to the nearest supported
> ones, without which the guest can't boot successfully: CONT_PMD_SHIFT
> and CONT_PTE_SHIFT fall back to PMD_SHIFT and PAGE_SHIFT respectively.
>
> Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/mmu.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 0f51585adc04..81cbdc368246 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -793,12 +793,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		vma_shift = PMD_SHIFT;
>  #endif
>
> +	if (vma_shift == CONT_PMD_SHIFT)
> +		vma_shift = PMD_SHIFT;
> +
>  	if (vma_shift == PMD_SHIFT &&
>  	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
>  		force_pte = true;
>  		vma_shift = PAGE_SHIFT;
>  	}
>
> +	if (vma_shift == CONT_PTE_SHIFT) {
> +		force_pte = true;
> +		vma_shift = PAGE_SHIFT;
> +	}
> +
>  	vma_pagesize = 1UL << vma_shift;
>  	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
>  		fault_ipa &= ~(vma_pagesize - 1);

Yup, nice catch. However, I think we should take this opportunity to
rationalise the logic here, and catch future discrepancies (should
someone add contiguous PUD or something similarly silly).

How about something like this (untested):

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cc323d96c9d4..d9a13a8a82e0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -787,14 +787,31 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PAGE_SHIFT;
 	}
 
-	if (vma_shift == PUD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
-		vma_shift = PMD_SHIFT;
+	switch (vma_shift) {
+	case PUD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+			break;
+		fallthrough;
 
-	if (vma_shift == PMD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
-		force_pte = true;
+	case CONT_PMD_SHIFT:
+		vma_shift = PMD_SHIFT;
+		fallthrough;
+
+	case PMD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
+			break;
+		fallthrough;
+
+	case CONT_PTE_SHIFT:
 		vma_shift = PAGE_SHIFT;
+		force_pte = true;
+		fallthrough;
+
+	case PAGE_SHIFT:
+		break;
+
+	default:
+		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
 	}
 
 	vma_pagesize = 1UL << vma_shift;

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
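
For illustration, the fallback behaviour of the proposed switch can be
exercised outside the kernel with a small userspace mock. This is only a
sketch of the control flow, not the kernel code: the shift values below
assume a 4K-page arm64 configuration, and fault_supports_stage2_huge_mapping()
is reduced to an always-true stub.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in shift values, assuming a 4K-page arm64 configuration. */
#define PAGE_SHIFT	12
#define CONT_PTE_SHIFT	16
#define PMD_SHIFT	21
#define CONT_PMD_SHIFT	25
#define PUD_SHIFT	30

/* Stub: pretend every stage-2 alignment check passes. */
static bool fault_supports_stage2_huge_mapping(unsigned long size)
{
	(void)size;
	return true;
}

/* Mirror of the proposed switch: pick the shift used for the stage-2 mapping. */
static unsigned long pick_shift(unsigned long vma_shift, bool *force_pte)
{
	switch (vma_shift) {
	case PUD_SHIFT:
		if (fault_supports_stage2_huge_mapping(1UL << PUD_SHIFT))
			break;
		/* fallthrough */
	case CONT_PMD_SHIFT:
		vma_shift = PMD_SHIFT;
		/* fallthrough */
	case PMD_SHIFT:
		if (fault_supports_stage2_huge_mapping(1UL << PMD_SHIFT))
			break;
		/* fallthrough */
	case CONT_PTE_SHIFT:
		vma_shift = PAGE_SHIFT;
		*force_pte = true;
		/* fallthrough */
	case PAGE_SHIFT:
		break;
	default:
		fprintf(stderr, "Unknown vma_shift %lu\n", vma_shift);
	}

	return vma_shift;
}

int main(void)
{
	unsigned long shifts[] = {
		PUD_SHIFT, CONT_PMD_SHIFT, PMD_SHIFT, CONT_PTE_SHIFT, PAGE_SHIFT,
	};

	for (unsigned int i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++) {
		bool force_pte = false;
		unsigned long out = pick_shift(shifts[i], &force_pte);

		printf("vma_shift %2lu -> %2lu%s\n", shifts[i], out,
		       force_pte ? " (force_pte)" : "");
	}

	return 0;
}

Built with a plain cc, this prints CONT_PMD_SHIFT collapsing to PMD_SHIFT and
CONT_PTE_SHIFT collapsing to PAGE_SHIFT with force_pte set, which is the
fallback the commit message describes; an unknown shift lands in the default
case rather than silently producing an unsupported mapping size.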