On Mon, Feb 26, 2024 at 12:26:04AM -0800, isaku.yamahata@xxxxxxxxx wrote:
> From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
>
> @@ -1041,6 +1255,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	struct kvm *kvm = vcpu->kvm;
>  	struct tdp_iter iter;
>  	struct kvm_mmu_page *sp;
> +	gfn_t raw_gfn;
> +	bool is_private = fault->is_private && kvm_gfn_shared_mask(kvm);

Why not do the kvm_gfn_shared_mask() check earlier, when determining
fault->is_private?  E.g. in kvm_mmu_page_fault().  (Rough, untested
sketch at the end of this mail.)

>  	int ret = RET_PF_RETRY;
>
>  	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1049,7 +1265,17 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>
>  	rcu_read_lock();
>
> -	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
> +	raw_gfn = gpa_to_gfn(fault->addr);
> +
> +	if (is_error_noslot_pfn(fault->pfn) ||

The check for a noslot pfn is not required after this cleanup series [1]
from Sean, right?

> +	    !kvm_pfn_to_refcounted_page(fault->pfn)) {

What's the purpose of rejecting a non-refcounted page (which is useful
for trusted I/O)?

Besides, it looks like the pages are allocated via kvm_faultin_pfn_private()
when fault->is_private, so where would a non-refcounted page come from?

> +		if (is_private) {
> +			rcu_read_unlock();
> +			return -EFAULT;
> +		}
> +	}

[1] https://lore.kernel.org/all/20240228024147.41573-1-seanjc@xxxxxxxxxx/

> +	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>  		int r;
>
>  		if (fault->nx_huge_page_workaround_enabled)

...

> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d399009ef1d7..e27c22449d85 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -201,6 +201,7 @@ struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
>
>  	return NULL;
>  }
> +EXPORT_SYMBOL_GPL(kvm_pfn_to_refcounted_page);
>
>  /*
>   * Switches to specified vcpu, until a matching vcpu_put()
> --
> 2.25.1
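
To make the first comment above more concrete, here is a rough, completely
untested sketch of folding the kvm_gfn_shared_mask() check into the
fault->is_private determination.  kvm_fault_is_private() is a made-up helper
name, and where exactly it would be called (kvm_mmu_page_fault() or wherever
fault->is_private is set) depends on the rest of the series; it only assumes
kvm_gfn_shared_mask() returns the shared-bit GFN mask, or 0 for a VM without
a shared bit, which is how the hunk above appears to use it.

/*
 * Untested sketch, not a patch.  kvm_fault_is_private() is a hypothetical
 * helper; kvm_gfn_shared_mask() comes from the TDX series under review and
 * is assumed to return the shared-bit GFN mask (0 when the VM has none).
 */
static bool kvm_fault_is_private(struct kvm *kvm, gpa_t gpa)
{
	/* No shared-GPA bit (non-TDX VM): never treat the fault as private. */
	if (!kvm_gfn_shared_mask(kvm))
		return false;

	/* With a shared bit, a GPA is private iff the shared bit is clear. */
	return !(gpa_to_gfn(gpa) & kvm_gfn_shared_mask(kvm));
}

If fault->is_private were set from something like the above at fault time,
kvm_tdp_mmu_map() could use fault->is_private directly instead of
re-checking kvm_gfn_shared_mask(kvm) there.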