From: Ashish Kalra <ashish.kalra@xxxxxxx>

With SNP/guest_memfd, private/encrypted memory should not be mappable,
and MMU notifications for HVA-mapped memory will only be relevant to
unencrypted guest memory. Therefore, the rationale behind issuing a
wbinvd_on_all_cpus() in sev_guest_memory_reclaimed() does not apply to
SNP guests, and the flush can be skipped.

Signed-off-by: Ashish Kalra <ashish.kalra@xxxxxxx>
[mdr: Add some clarifications in commit]
Signed-off-by: Michael Roth <michael.roth@xxxxxxx>
---
 arch/x86/kvm/svm/sev.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6c6d5a320d72..f027def3a79e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2852,7 +2852,14 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
 {
-	if (!sev_guest(kvm))
+	/*
+	 * With SNP+gmem, private/encrypted memory should be
+	 * unreachable via the hva-based mmu notifiers. Additionally,
+	 * for shared->private translations, H/W coherency will ensure
+	 * first guest access to the page would clear out any existing
+	 * dirty copies of that cacheline.
+	 */
+	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
 	wbinvd_on_all_cpus();
-- 
2.25.1