On Mon, Jul 19, 2021, Brijesh Singh wrote:
> 
> On 7/19/21 11:54 AM, Sean Christopherson wrote:
> > > As I said in previous comments, by default all the memory is in the
> > > hypervisor state. If the rmpupdate() failed, that means nothing is changed in
> > > the RMP and there is no need to reclaim. The reclaim is required only if the
> > > pages are assigned in the RMP table.
> > 
> > I wasn't referring to RMPUPDATE failing here (or anywhere). This is the vCPU free
> > path, which I think means the svm->vmsa page was successfully updated in the RMP
> > during LAUNCH_UPDATE. snp_launch_update_vmsa() goes through snp_page_reclaim()
> > on LAUNCH_UPDATE failure, whereas this happy path does not. Is there some other
> > transition during teardown that obviates the need for reclaim? If so, a comment
> > to explain that would be very helpful.
> 
> In this patch, the sev_free_vcpu() hunk takes care of reclaiming the vmsa
> pages before releasing them. I think it will make it more obvious after I add
> a helper so that we don't depend on the user reading the comment block to see
> what it's doing.

Where? I feel like I'm missing something. The only change to sev_free_vcpu() I
see is the addition of the rmpupdate(); I don't see any reclaim path.

@@ -2346,8 +2454,25 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.guest_state_protected)
 		sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
+
+	/*
+	 * If it's an SNP guest, then the VMSA was added in the RMP entry as a guest owned page.
+	 * Transition the page to hypervisor state before releasing it back to the system.
+	 */
+	if (sev_snp_guest(vcpu->kvm)) {
+		struct rmpupdate e = {};
+		int rc;
+
+		rc = rmpupdate(virt_to_page(svm->vmsa), &e);
+		if (rc) {
+			pr_err("Failed to release SNP guest VMSA page (rc %d), leaking it\n", rc);
+			goto skip_vmsa_free;
+		}
+	}
+
 	__free_page(virt_to_page(svm->vmsa));
 
+skip_vmsa_free:
 	if (svm->ghcb_sa_free)
 		kfree(svm->ghcb_sa);
 }
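
E.g. something like this (completely untested, and both the helper name and
the snp_page_reclaim() signature are guesses on my part, modeled on the
reclaim done in the snp_launch_update_vmsa() error path):

	/*
	 * Hypothetical helper: reclaim the guest-owned VMSA page from the
	 * firmware and transition it back to hypervisor state before freeing
	 * it.  On failure, leak the page; handing a guest-owned page back to
	 * the allocator would trigger an RMP #PF on the next host write.
	 */
	static void snp_free_vmsa_page(struct page *page)
	{
		struct rmpupdate e = {};

		/* Assumed signature, per the snp_launch_update_vmsa() error path. */
		if (snp_page_reclaim(page_to_pfn(page)))
			goto e_leak;

		/* A zeroed rmpupdate entry transitions the page to hypervisor state. */
		if (rmpupdate(page, &e))
			goto e_leak;

		__free_page(page);
		return;

	e_leak:
		pr_err("Failed to reclaim SNP guest VMSA page, leaking it\n");
	}

The hunk above would then reduce to:

	if (sev_snp_guest(vcpu->kvm))
		snp_free_vmsa_page(virt_to_page(svm->vmsa));
	else
		__free_page(virt_to_page(svm->vmsa));

and the skip_vmsa_free label goes away entirely.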