On Mon, Mar 11, 2024 at 10:29:01AM -0700, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> On Fri, Mar 01, 2024, isaku.yamahata@xxxxxxxxx wrote:
> > From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> >
> > Introduce a helper function to call the KVM page fault handler.  This
> > allows a new ioctl to invoke the fault handler to populate mappings
> > without exposing RET_PF_* enums or other KVM MMU internal definitions.
> >
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> > ---
> >  arch/x86/kvm/mmu.h     |  3 +++
> >  arch/x86/kvm/mmu/mmu.c | 30 ++++++++++++++++++++++++++++++
> >  2 files changed, 33 insertions(+)
> >
> > diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> > index 60f21bb4c27b..48870c5e08ec 100644
> > --- a/arch/x86/kvm/mmu.h
> > +++ b/arch/x86/kvm/mmu.h
> > @@ -183,6 +183,9 @@ static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
> >  	__kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
> >  }
> >
> > +int kvm_mmu_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > +		     u8 max_level, u8 *goal_level);
> > +
> >  /*
> >   * Check if a given access (described through the I/D, W/R and U/S bits of a
> >   * page fault error code pfec) causes a permission fault with the given PTE
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index e4cc7f764980..7d5e80d17977 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4659,6 +4659,36 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  	return direct_page_fault(vcpu, fault);
> >  }
> >
> > +int kvm_mmu_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > +		     u8 max_level, u8 *goal_level)
> > +{
> > +	struct kvm_page_fault fault = KVM_PAGE_FAULT_INIT(vcpu, gpa, error_code,
> > +							  false, max_level);
> > +	int r;
> > +
> > +	r = __kvm_mmu_do_page_fault(vcpu, &fault);
> > +
> > +	if (is_error_noslot_pfn(fault.pfn) || vcpu->kvm->vm_bugged)
> > +		return -EFAULT;
>
> This clobbers a non-zero 'r'.  And KVM returns -EIO if the VM is bugged/dead, not
> -EFAULT.  I also don't see why KVM needs to explicitly check is_error_noslot_pfn(),
> that should be funneled to RET_PF_EMULATE.

I'll drop this check.

> > +
> > +	switch (r) {
> > +	case RET_PF_RETRY:
> > +		return -EAGAIN;
> > +
> > +	case RET_PF_FIXED:
> > +	case RET_PF_SPURIOUS:
> > +		*goal_level = fault.goal_level;
> > +		return 0;
> > +
> > +	case RET_PF_CONTINUE:
> > +	case RET_PF_EMULATE:
>
> -EINVAL would be more appropriate for RET_PF_EMULATE.
>
> > +	case RET_PF_INVALID:
>
> CONTINUE and INVALID should be WARN conditions.

Will update them.
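Roughly like the following (untested sketch of the reworked error mapping, still using
KVM_PAGE_FAULT_INIT()/__kvm_mmu_do_page_fault() from this patch; I kept a vm_bugged
check returning -EIO per your comment, but it could be dropped entirely, and the
return value for the WARN cases is open to suggestion):

	struct kvm_page_fault fault = KVM_PAGE_FAULT_INIT(vcpu, gpa, error_code,
							  false, max_level);
	int r;

	r = __kvm_mmu_do_page_fault(vcpu, &fault);

	/* Bugged/dead VM is -EIO, not -EFAULT; is_error_noslot_pfn() check dropped. */
	if (vcpu->kvm->vm_bugged)
		return -EIO;

	switch (r) {
	case RET_PF_RETRY:
		return -EAGAIN;

	case RET_PF_FIXED:
	case RET_PF_SPURIOUS:
		*goal_level = fault.goal_level;
		return 0;

	case RET_PF_EMULATE:
		return -EINVAL;

	case RET_PF_CONTINUE:
	case RET_PF_INVALID:
	default:
		/* These should be unreachable from this path. */
		WARN_ON_ONCE(1);
		return -EIO;
	}

--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>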