On Mon, Aug 15, 2022 at 4:01 PM David Matlack <dmatlack@xxxxxxxxxx> wrote:
>
> Try to handle faults on GFNs that do not have a backing memslot during
> kvm_faultin_pfn(), rather than relying on the caller to call
> handle_abnormal_pfn() right after kvm_faultin_pfn(). This reduces all of
> the page fault paths by eliminating duplicate code.
>
> Opportunistically tweak the comment about handling gfn > host.MAXPHYADDR
> to reflect that the effect of returning RET_PF_EMULATE at that point is
> to avoid creating an MMIO SPTE for such GFNs.
>
> No functional change intended.
>
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 55 +++++++++++++++++-----------------
>  arch/x86/kvm/mmu/paging_tmpl.h |  4 ---
>  2 files changed, 27 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
[...]
> @@ -4181,6 +4185,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	if (unlikely(is_error_pfn(fault->pfn)))
>  		return kvm_handle_error_pfn(fault);
>
> +	if (unlikely(!fault->slot))
> +		return kvm_handle_noslot_fault(vcpu, fault, ACC_ALL);

This is broken. This needs to be pte_access for the shadow paging case,
not ACC_ALL. I remember now I had that in an earlier version but it got
lost at some point when I was rebasing locally.
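
Something like the following (an untested sketch on top of this patch;
names and signatures are taken from this series, and the exact plumbing
may differ) is what I'm thinking: pass the access mask into
kvm_faultin_pfn() so the shadow paging path can forward the guest
walker's pte_access while the direct paths keep ACC_ALL:

	/*
	 * mmu.c: take the access bits from the caller instead of
	 * hard-coding ACC_ALL for the no-slot case.
	 */
	static int kvm_faultin_pfn(struct kvm_vcpu *vcpu,
				   struct kvm_page_fault *fault,
				   unsigned int access)
	{
		...
		if (unlikely(!fault->slot))
			return kvm_handle_noslot_fault(vcpu, fault, access);
		...
	}

	/* direct faults (TDP MMU, direct legacy MMU): no guest PTE to honor */
	r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);

	/* paging_tmpl.h: shadow paging must honor the guest PTE's access bits */
	r = kvm_faultin_pfn(vcpu, fault, walker.pte_access);

That keeps the no-slot handling consolidated in kvm_faultin_pfn()
without silently widening the access for shadow paging faults.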