Hi Marc,

On 7/17/21 10:55 AM, Marc Zyngier wrote:
> Since we only support PMD-sized mappings for THP, getting
> a permission fault on a level that results in a mapping
> being larger than PAGE_SIZE is a sure indication that we have
> already upgraded our mapping to a PMD.
>
> In this case, there is no need to try and parse userspace page
> tables, as the fault information already tells us everything.
>
> Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/mmu.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index db6314b93e99..c036a480ca27 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1088,9 +1088,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 * If we are not forced to use page mapping, check if we are
>  	 * backed by a THP and thus use block mapping if possible.
>  	 */
> -	if (vma_pagesize == PAGE_SIZE && !(force_pte || device))
> -		vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva,
> -							   &pfn, &fault_ipa);
> +	if (vma_pagesize == PAGE_SIZE && !force_pte) {

Looks like it's now possible to call transparent_hugepage_adjust() for
devices (when fault_status != FSC_PERM). Commit 2aa53d68cee6 ("KVM:
arm64: Try stage2 block mapping for host device MMIO") makes a good
case for keeping the !device check. Was it removed unintentionally? If
so, the (untested) sketch at the end of this email shows one way to
keep it.

> +		if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE)
> +			vma_pagesize = fault_granule;
> +		else
> +			vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
> +								   hva, &pfn,
> +								   &fault_ipa);
> +	}

This change makes sense to me: we can only get stage 2 permission
faults on a leaf entry, since stage 2 tables don't have the
APTable/XNTable/PXNTable bits. The biggest block mapping we install at
stage 2 via transparent_hugepage_adjust() is PMD-sized, so if
fault_granule is larger than PAGE_SIZE, it must be PMD_SIZE.

Thanks,

Alex

>
>  	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
>  		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
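
P.S. Here's the untested sketch mentioned above, keeping the !device
check while still taking the fault_granule shortcut on a permission
fault. The comment just spells out the reasoning from this email;
whether the !device check should stay is of course your call:

	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
		/*
		 * Stage 2 permission faults are only reported on leaf
		 * entries, and the only block mapping installed by
		 * transparent_hugepage_adjust() is PMD-sized, so a
		 * fault_granule larger than PAGE_SIZE means the mapping
		 * has already been upgraded to a PMD.
		 */
		if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE)
			vma_pagesize = fault_granule;
		else
			vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
								   hva, &pfn,
								   &fault_ipa);
	}

With that, device mappings still never go through
transparent_hugepage_adjust(), matching the behaviour before this
patch.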