Re: [PATCH RFC v8 36/56] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_FINISH command


 



On 20.02.23 19:38, Michael Roth wrote:
From: Brijesh Singh <brijesh.singh@xxxxxxx>

The KVM_SEV_SNP_LAUNCH_FINISH command finalizes the cryptographic digest and
stores it as the measurement of the guest at launch.

While finalizing the launch flow, it also issues the LAUNCH_UPDATE command
to encrypt the VMSA pages.

If it's an SNP guest, the VMSA page was marked guest-owned in its RMP entry
and also removed from the kernel direct map, so flush it later, after it has
been transitioned back to hypervisor state and restored in the direct map.

Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
Signed-off-by: Harald Hoyer <harald@xxxxxxxxxxx>
Signed-off-by: Ashish Kalra <ashish.kalra@xxxxxxx>
Signed-off-by: Michael Roth <michael.roth@xxxxxxx>
---
  .../virt/kvm/x86/amd-memory-encryption.rst    |  23 ++++
  arch/x86/kvm/svm/sev.c                        | 122 ++++++++++++++++++
  include/uapi/linux/kvm.h                      |  14 ++
  3 files changed, 159 insertions(+)

[...]


diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 03dd227f6090..515e22d0dc30 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2280,6 +2280,109 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
                                       snp_launch_update_gfn_handler, argp);
  }

+static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+       struct sev_data_snp_launch_update data = {};
+       struct kvm_vcpu *vcpu;
+       unsigned long i;
+       int ret;
+
+       data.gctx_paddr = __psp_pa(sev->snp_context);
+       data.page_type = SNP_PAGE_TYPE_VMSA;
+
+       kvm_for_each_vcpu(i, vcpu, kvm) {
+               struct vcpu_svm *svm = to_svm(vcpu);
+               u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
+
+               /* Perform some pre-encryption checks against the VMSA */
+               ret = sev_es_sync_vmsa(svm);
+               if (ret)
+                       return ret;
+
+               /* Transition the VMSA page to a firmware state. */
+               ret = rmp_make_private(pfn, -1, PG_LEVEL_4K, sev->asid, true);
+               if (ret)
+                       return ret;
+
+               /* Issue the SNP command to encrypt the VMSA */
+               data.address = __sme_pa(svm->sev_es.vmsa);
+               ret = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE,
+                                     &data, &argp->error);


There is no contract in KVM that dictates that the first entry in the vcpu list needs to be vcpu_id == 0 (the BSP). That means that with a userspace which spawns vCPUs in parallel on init, you can end up with the BSP behind the APs in the LAUNCH_UPDATE order.

This is a problem because the order matters for LAUNCH_UPDATE: BSP and AP vCPUs have different initial state, so anyone who wants to reconstruct the launch digest needs to know the order in which the VMSAs were measured.
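
To make the ordering dependence concrete: the launch digest is extended as a hash chain, one LAUNCH_UPDATE at a time, so roughly (pseudocode only; page_info() and vmsa_digest() are placeholders, see the SNP firmware ABI spec for the real PAGE_INFO layout):

        /* Each update folds the previous digest into the next one. */
        ld = sha384(page_info(ld, vmsa_digest(bsp), SNP_PAGE_TYPE_VMSA));
        ld = sha384(page_info(ld, vmsa_digest(ap),  SNP_PAGE_TYPE_VMSA));

Swapping the two calls yields a different final measurement.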

The easiest way I can think of to fix this is to call snp_launch_update_vmsa twice: once filtering for vcpu_id == 0 and once for vcpu_id != 0.
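
A minimal sketch of that two-pass idea (not from the patch; the snp_launch_update_one_vmsa() helper is hypothetical and stands for the body of the existing per-vCPU loop):

static int snp_launch_update_vmsa_pass(struct kvm *kvm,
                                       struct kvm_sev_cmd *argp,
                                       bool bsp_only)
{
        struct kvm_vcpu *vcpu;
        unsigned long i;
        int ret;

        kvm_for_each_vcpu(i, vcpu, kvm) {
                /* First pass handles only the BSP, second pass only the APs. */
                if ((vcpu->vcpu_id == 0) != bsp_only)
                        continue;

                /* Hypothetical helper: the existing per-vCPU loop body. */
                ret = snp_launch_update_one_vmsa(kvm, argp, to_svm(vcpu));
                if (ret)
                        return ret;
        }

        return 0;
}

and in the caller, measure the BSP first, then the APs, regardless of how the vcpu list happens to be ordered:

        ret = snp_launch_update_vmsa_pass(kvm, argp, true);
        if (!ret)
                ret = snp_launch_update_vmsa_pass(kvm, argp, false);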


Thanks,

Alex










