RE: [PATCH Part2 v5 27/45] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_FINISH command


Hello Marc,

-----Original Message-----
From: Marc Orr <marcorr@xxxxxxxxxx> 
Sent: Wednesday, May 18, 2022 3:21 PM
To: Kalra, Ashish <Ashish.Kalra@xxxxxxx>
Cc: x86 <x86@xxxxxxxxxx>; LKML <linux-kernel@xxxxxxxxxxxxxxx>; kvm list <kvm@xxxxxxxxxxxxxxx>; linux-coco@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; Linux Crypto Mailing List <linux-crypto@xxxxxxxxxxxxxxx>; Thomas Gleixner <tglx@xxxxxxxxxxxxx>; Ingo Molnar <mingo@xxxxxxxxxx>; Joerg Roedel <jroedel@xxxxxxx>; Lendacky, Thomas <Thomas.Lendacky@xxxxxxx>; H. Peter Anvin <hpa@xxxxxxxxx>; Ard Biesheuvel <ardb@xxxxxxxxxx>; Paolo Bonzini <pbonzini@xxxxxxxxxx>; Sean Christopherson <seanjc@xxxxxxxxxx>; Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>; Wanpeng Li <wanpengli@xxxxxxxxxxx>; Jim Mattson <jmattson@xxxxxxxxxx>; Andy Lutomirski <luto@xxxxxxxxxx>; Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>; Sergio Lopez <slp@xxxxxxxxxx>; Peter Gonda <pgonda@xxxxxxxxxx>; Peter Zijlstra <peterz@xxxxxxxxxxxxx>; Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>; David Rientjes <rientjes@xxxxxxxxxx>; Dov Murik <dovmurik@xxxxxxxxxxxxx>; Tobin Feldman-Fitzthum <tobin@xxxxxxx>; Borislav Petkov <bp@xxxxxxxxx>; Roth, Michael <Michael.Roth@xxxxxxx>; Vlastimil Babka <vbabka@xxxxxxx>; Kirill A . Shutemov <kirill@xxxxxxxxxxxxx>; Andi Kleen <ak@xxxxxxxxxxxxxxx>; Tony Luck <tony.luck@xxxxxxxxx>; Sathyanarayanan Kuppuswamy <sathyanarayanan.kuppuswamy@xxxxxxxxxxxxxxx>; Alper Gun <alpergun@xxxxxxxxxx>
Subject: Re: [PATCH Part2 v5 27/45] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_FINISH command

> @@ -2364,16 +2467,29 @@ static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
>  void sev_free_vcpu(struct kvm_vcpu *vcpu)
>  {
>         struct vcpu_svm *svm;
> +       u64 pfn;
>
>         if (!sev_es_guest(vcpu->kvm))
>                 return;
>
>         svm = to_svm(vcpu);
> +       pfn = __pa(svm->vmsa) >> PAGE_SHIFT;
>
>         if (vcpu->arch.guest_state_protected)
>                 sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
> +
> +       /*
> +        * If it's an SNP guest, then the VMSA was added in the RMP entry as
> +        * a guest-owned page. Transition the page to hypervisor state
> +        * before releasing it back to the system.
> +        */
> +       if (sev_snp_guest(vcpu->kvm) &&
> +           host_rmp_make_shared(pfn, PG_LEVEL_4K, false))
> +               goto skip_vmsa_free;
> +
>         __free_page(virt_to_page(svm->vmsa));
>
> +skip_vmsa_free:
>         if (svm->ghcb_sa_free)
>                 kfree(svm->ghcb_sa);
>  }

> Hi Ashish. We're still working with this patch set internally. We found a
> bug that I wanted to report in this patch. Above, we need to flush the
> VMSA page, `svm->vmsa`, _after_ we call `host_rmp_make_shared()` to mark
> the page as shared. Otherwise, the host gets an RMP violation when it
> tries to flush the guest-owned VMSA page.

> The bug was silent, at least on our Milan platforms, before d45829b351ee6
> ("KVM: SVM: Flush when freeing encrypted pages even on SME_COHERENT
> CPUs"), because the `sev_flush_guest_memory()` helper was a no-op on
> platforms with the SME_COHERENT feature. However, after d45829b351ee6, we
> unconditionally do the flush to keep the IO address space coherent. And
> then we hit this bug.
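
Right, and before d45829b351ee6 the flush helper bailed out early on SME_COHERENT parts, roughly like this (a paraphrase for illustration, not the exact upstream code):

static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
				   unsigned int len)
{
	/*
	 * Pre-d45829b351ee6: if hardware enforces cache coherency for
	 * encrypted mappings of the same physical page, skip the flush
	 * entirely.  The guest-owned VMSA page was therefore never touched
	 * here, which is why freeing it before the RMP transition never
	 * faulted on those parts.
	 */
	if (boot_cpu_has(X86_FEATURE_SME_COHERENT))
		return;

	/* ... VM_PAGE_FLUSH MSR / WBINVD fallback elided ... */
}

With the early return gone, the flush runs unconditionally, and touching a VMSA page that the RMP still marks as guest-owned (and that has been removed from the direct map) is what trips the fault.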

Yes, I have already hit this bug and added a fix, as below:

commit 944fba38cbd3baf1ece76197630bd45e83089f14
Author: Ashish Kalra <ashish.kalra@xxxxxxx>
Date:   Tue May 3 14:33:29 2022 +0000

    KVM: SVM: Fix VMSA flush for an SNP guest.
    
    If it's an SNP guest, the VMSA was added in the RMP entry as
    a guest-owned page and also removed from the kernel direct map,
    so flush it later, after it has been transitioned back to
    hypervisor state and restored in the direct map.
    
    Signed-off-by: Ashish Kalra <ashish.kalra@xxxxxxx>

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index cc7c34d8b0db..0f772a0f1d35 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2840,27 +2840,23 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
 
        svm = to_svm(vcpu);
 
-       if (vcpu->arch.guest_state_protected)
-               sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);
-
        /*
         * If it's an SNP guest, then the VMSA was added in the RMP entry as
         * a guest-owned page. Transition the page to hypervisor state
         * before releasing it back to the system.
+        * Also, the page has been removed from the kernel direct map, so
+        * flush it later, after it is transitioned back to hypervisor state
+        * and restored in the direct map.
         */
        if (sev_snp_guest(vcpu->kvm)) {
                u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
                if (host_rmp_make_shared(pfn, PG_LEVEL_4K, false))
                        goto skip_vmsa_free;
        }
 
+       if (vcpu->arch.guest_state_protected)
+               sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);
+
        __free_page(virt_to_page(svm->sev_es.vmsa));
 
 skip_vmsa_free:
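
For reference, with this change applied the tail of sev_free_vcpu() ends up looking roughly like the sketch below (assembled from the hunks above; field names such as svm->sev_es.ghcb_sa_free follow the current upstream struct layout and may differ slightly in the posted series):

void sev_free_vcpu(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm;

	if (!sev_es_guest(vcpu->kvm))
		return;

	svm = to_svm(vcpu);

	/*
	 * For an SNP guest the VMSA is a guest-owned page in the RMP and has
	 * been removed from the kernel direct map.  Transition it back to
	 * hypervisor state first; if that fails, leak the page rather than
	 * touching or freeing a still guest-owned page.
	 */
	if (sev_snp_guest(vcpu->kvm)) {
		u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;

		if (host_rmp_make_shared(pfn, PG_LEVEL_4K, false))
			goto skip_vmsa_free;
	}

	/* Flush only once the page is hypervisor-owned and mapped again. */
	if (vcpu->arch.guest_state_protected)
		sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);

	__free_page(virt_to_page(svm->sev_es.vmsa));

skip_vmsa_free:
	if (svm->sev_es.ghcb_sa_free)
		kfree(svm->sev_es.ghcb_sa);
}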


This will be part of the next hypervisor patch series, which we will be posting soon.
Thanks,
Ashish



