> @@ -2364,16 +2467,29 @@ static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
>  void sev_free_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm;
> +	u64 pfn;
>  
>  	if (!sev_es_guest(vcpu->kvm))
>  		return;
>  
>  	svm = to_svm(vcpu);
> +	pfn = __pa(svm->vmsa) >> PAGE_SHIFT;
>  
>  	if (vcpu->arch.guest_state_protected)
>  		sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
> +
> +	/*
> +	 * If its an SNP guest, then VMSA was added in the RMP entry as
> +	 * a guest owned page. Transition the page to hyperivosr state
> +	 * before releasing it back to the system.
> +	 */
> +	if (sev_snp_guest(vcpu->kvm) &&
> +	    host_rmp_make_shared(pfn, PG_LEVEL_4K, false))
> +		goto skip_vmsa_free;
> +
>  	__free_page(virt_to_page(svm->vmsa));
>  
> +skip_vmsa_free:
>  	if (svm->ghcb_sa_free)
>  		kfree(svm->ghcb_sa);
>  }

Hi Ashish,

We're still working with this patch set internally, and we found a bug in this patch that I wanted to report.

Above, we need to flush the VMSA page, `svm->vmsa`, _after_ we call `host_rmp_make_shared()` to mark the page as shared. Otherwise, the host gets an RMP violation when it tries to flush the guest-owned VMSA page.

The bug was silent, at least on our Milan platforms, before d45829b351ee6 ("KVM: SVM: Flush when freeing encrypted pages even on SME_COHERENT CPUs"), because the `sev_flush_guest_memory()` helper was a no-op on platforms with the SME_COHERENT feature. However, after d45829b351ee6, we do the flush unconditionally to keep the IO address space coherent, and that is when we hit this bug. A rough, untested sketch of the reordering we have in mind is included at the end of this mail.

Thanks,
Marc
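
For illustration only, here is a rough sketch of the ordering described above: transition the VMSA page back to hypervisor (shared) state in the RMP first, and only then flush and free it. This is untested, reuses only the identifiers from the quoted hunk (`host_rmp_make_shared()`, `sev_flush_guest_memory()`, `sev_snp_guest()`), and is meant as a sketch of the fix rather than an actual patch.

```c
void sev_free_vcpu(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm;
	u64 pfn;

	if (!sev_es_guest(vcpu->kvm))
		return;

	svm = to_svm(vcpu);
	pfn = __pa(svm->vmsa) >> PAGE_SHIFT;

	/*
	 * For an SNP guest the VMSA was added to the RMP as a guest-owned
	 * page. Transition it back to hypervisor state first; if that
	 * fails, leak the page rather than touching it further.
	 */
	if (sev_snp_guest(vcpu->kvm) &&
	    host_rmp_make_shared(pfn, PG_LEVEL_4K, false))
		goto skip_vmsa_free;

	/*
	 * Only flush once the page is hypervisor-owned again; flushing a
	 * guest-owned page is what triggers the RMP violation.
	 */
	if (vcpu->arch.guest_state_protected)
		sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);

	__free_page(virt_to_page(svm->vmsa));

skip_vmsa_free:
	if (svm->ghcb_sa_free)
		kfree(svm->ghcb_sa);
}
```

Compared to the quoted hunk, the only change is that the RMP transition happens before the flush; as a side effect, the `skip_vmsa_free` path also skips the flush, which seems fine since that page is being leaked anyway.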