On Thu, Jul 6, 2023 at 10:54 AM Isaku Yamahata <isaku.yamahata@xxxxxxxxx> wrote:
>
> On Tue, Jul 04, 2023 at 04:50:49PM +0900,
> David Stevens <stevensd@xxxxxxxxxxxx> wrote:
>
> > From: David Stevens <stevensd@xxxxxxxxxxxx>
> >
> > Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn.
> >
> > Signed-off-by: David Stevens <stevensd@xxxxxxxxxxxx>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 35 +++++++++++++++++++++++++----------
> >  1 file changed, 25 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index ec169f5c7dce..e44ab512c3a1 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4296,7 +4296,12 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
> >  static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> >          struct kvm_memory_slot *slot = fault->slot;
> > -        bool async;
> > +        struct kvm_follow_pfn foll = {
> > +                .slot = slot,
> > +                .gfn = fault->gfn,
> > +                .flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
> > +                .allow_write_mapping = true,
> > +        };
> >
> >          /*
> >           * Retry the page fault if the gfn hit a memslot that is being deleted
> > @@ -4325,12 +4330,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >                  return RET_PF_EMULATE;
> >          }
> >
> > -        async = false;
> > -        fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
> > -                                          fault->write, &fault->map_writable,
> > -                                          &fault->hva);
> > -        if (!async)
> > -                return RET_PF_CONTINUE; /* *pfn has correct page already */
> > +        foll.flags |= FOLL_NOWAIT;
> > +        fault->pfn = __kvm_follow_pfn(&foll);
> > +
> > +        if (!is_error_noslot_pfn(fault->pfn))
>
> We have pfn in struct kvm_follow_pfn as output. Can we make __kvm_follow_pfn()
> return int instead of kvm_pfn_t? KVM_PFN_* seems widely used, though.

Switching __kvm_follow_pfn to return an int isn't difficult, but doing
so cleanly would require reworking the kvm_pfn_t/KVM_PFN_ERR_* API,
which as you said is quite widely used. That's a bit larger in scope
than I want to take on in this patch series.
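To illustrate, here is roughly what the call site above would turn
into with an int return (just a sketch of the shape of the problem;
errno_to_kvm_pfn_err() is a made-up helper, not something that exists
in this series):

        int r;

        foll.flags |= FOLL_NOWAIT;
        r = __kvm_follow_pfn(&foll);    /* now 0 or -errno */
        if (r) {
                /*
                 * Today the error is encoded in the returned pfn itself
                 * (KVM_PFN_ERR_FAULT, KVM_PFN_ERR_HWPOISON, ...) and the
                 * callers check it with is_error_noslot_pfn() and friends.
                 * With an int return we'd either need an -errno to
                 * KVM_PFN_ERR_* translation here, or we'd have to rework
                 * all of those callers.
                 */
                fault->pfn = errno_to_kvm_pfn_err(r);   /* hypothetical */
                return RET_PF_CONTINUE;
        }
        fault->pfn = foll.pfn;

It's that translation layer (or the caller rework that would avoid it)
that I'd rather not fold into this series.

-David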