On Mon, May 20, 2024 at 10:16:54AM +0000, "Huang, Kai" <kai.huang@xxxxxxxxx> wrote:
> On Wed, 2024-05-01 at 03:52 -0500, Michael Roth wrote:
> > This will handle the RMP table updates needed to put a page into a
> > private state before mapping it into an SEV-SNP guest.
> > 
> > 
> 
> [...]
> 
> > +int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order)
> > +{
> > +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > +        kvm_pfn_t pfn_aligned;
> > +        gfn_t gfn_aligned;
> > +        int level, rc;
> > +        bool assigned;
> > +
> > +        if (!sev_snp_guest(kvm))
> > +                return 0;
> > +
> > +        rc = snp_lookup_rmpentry(pfn, &assigned, &level);
> > +        if (rc) {
> > +                pr_err_ratelimited("SEV: Failed to look up RMP entry: GFN %llx PFN %llx error %d\n",
> > +                                   gfn, pfn, rc);
> > +                return -ENOENT;
> > +        }
> > +
> > +        if (assigned) {
> > +                pr_debug("%s: already assigned: gfn %llx pfn %llx max_order %d level %d\n",
> > +                         __func__, gfn, pfn, max_order, level);
> > +                return 0;
> > +        }
> > +
> > +        if (is_large_rmp_possible(kvm, pfn, max_order)) {
> > +                level = PG_LEVEL_2M;
> > +                pfn_aligned = ALIGN_DOWN(pfn, PTRS_PER_PMD);
> > +                gfn_aligned = ALIGN_DOWN(gfn, PTRS_PER_PMD);
> > +        } else {
> > +                level = PG_LEVEL_4K;
> > +                pfn_aligned = pfn;
> > +                gfn_aligned = gfn;
> > +        }
> > +
> > +        rc = rmp_make_private(pfn_aligned, gfn_to_gpa(gfn_aligned), level, sev->asid, false);
> > +        if (rc) {
> > +                pr_err_ratelimited("SEV: Failed to update RMP entry: GFN %llx PFN %llx level %d error %d\n",
> > +                                   gfn, pfn, level, rc);
> > +                return -EINVAL;
> > +        }
> > +
> > +        pr_debug("%s: updated: gfn %llx pfn %llx pfn_aligned %llx max_order %d level %d\n",
> > +                 __func__, gfn, pfn, pfn_aligned, max_order, level);
> > +
> > +        return 0;
> > +}
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index b70556608e8d..60783e9f2ae8 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -5085,6 +5085,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
> >          .vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
> >          .vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
> >          .alloc_apic_backing_page = svm_alloc_apic_backing_page,
> > +
> > +        .gmem_prepare = sev_gmem_prepare,
> >  };
> > 
> > 
> 
> +Rick, Isaku,
> 
> I am wondering whether this can be done in the KVM page fault handler?
> 
> The reason that I am asking is KVM will introduce several new
> kvm_x86_ops::xx_private_spte() ops for TDX to handle setting up the
> private mapping, and I am wondering whether SNP can just reuse some of
> them so we can avoid having this .gmem_prepare():

Although I can't speak for the SNP folks, I guess those hooks don't make
sense for them.  I guess they want to stay away from directly modifying
the TDP MMU by adding hooks to it; instead, they intentionally chose to
add hooks to guest_memfd.

Maybe it's possible for SNP to use those hooks, but what would the
benefit be for SNP?  If the benefit you are looking for is allowing the
TDP MMU hooks to also cover the shared page table, what about other VM
types, such as SW_PROTECTED or future ones?
--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
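
For context on the two hook points being compared above, here is a minimal
sketch of how the .gmem_prepare op is reached from guest_memfd rather than
from the TDP MMU fault path.  The kvm_arch_gmem_prepare() wrapper name, its
parameter order, and its call site are illustrative assumptions, not code
quoted from the series, and the real dispatch likely goes through KVM's
static_call() machinery rather than a direct call through kvm_x86_ops:

/*
 * Sketch (assumed names): guest_memfd invokes an arch hook when backing
 * pages are first prepared for the guest, and x86 forwards that to the
 * vendor op the quoted patch wires up (.gmem_prepare = sev_gmem_prepare).
 */
int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order)
{
        /*
         * Dispatch to the vendor implementation if one is registered; for
         * SEV-SNP this is sev_gmem_prepare(), which performs the RMP
         * update before the page is mapped into the guest.  This happens
         * at page-preparation time in guest_memfd, independently of the
         * TDP MMU fault path where the TDX private-SPTE ops would run.
         */
        if (kvm_x86_ops.gmem_prepare)
                return kvm_x86_ops.gmem_prepare(kvm, pfn, gfn, max_order);
        return 0;
}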