On 3/28/19 4:24 PM, Nathaniel McCallum wrote:
> On Thu, Mar 28, 2019 at 4:55 PM Lendacky, Thomas
> <Thomas.Lendacky@xxxxxxx> wrote:
>>
>> On 3/28/19 2:57 PM, Nathaniel McCallum wrote:
>>> During the launch of an SEV-enabled guest, a measurement is taken of
>>> all plaintext pages (KVM_SEV_LAUNCH_UPDATE_DATA) and, in SEV-ES, VCPU
>>> state (KVM_SEV_LAUNCH_UPDATE_VMSA) injected into the guest. This
>>
>> I'm just going to address the SEV-ES / LAUNCH_UPDATE_VMSA portion of this.
>> And, just to be clear, there has been no SEV-ES support submitted yet, so
>> KVM_SEV_LAUNCH_UPDATE_VMSA doesn't exist yet.
>
> https://github.com/torvalds/linux/blob/master/include/uapi/linux/kvm.h#L1451
>
> The kernel already exports a definition for it.

Right, I meant from a functional point of view. You receive an -EINVAL if
you try to invoke it.

>
>>> measurement becomes part of the chain of trust reported to the guest
>>> owner.
>>>
>>> Currently, the kernel passes these commands somewhat directly to the
>>> firmware for measurement. Likewise, the firmware calculates the
>>> measurement of the data in the order it is received. This is fragile.
>>> It means that the entire burden for reproducing the measurement falls
>>> on the hypervisor. In turn, this means that slight, even unintentional,
>>> changes to the ordering by the hypervisor can result in different
>>> measurements on different hypervisor versions.
>>>
>>> There is also a secondary problem that, even though it controls CPU
>>> state, KVM_SEV_LAUNCH_UPDATE_VMSA is called on the KVM virtual
>>> machine file descriptor. In order to support multiple vCPUs, you have
>>> to call it once for each vCPU - in the order the vCPUs are assigned to
>>> the guest. Hypervisors typically call this vCPU initialization code in
>>> an independent thread, making this ordering somewhat cumbersome (QEMU
>>> does this today).
>>>
>>> In response to these problems, I'd like to propose the following:
>>>
>>> 1. Calls to KVM_SEV_LAUNCH_UPDATE_VMSA are moved from the VM fd to the
>>>    vCPU fd.
>>
>> Given #2 below, these calls aren't needed.
>>
>>>
>>> 2. All calls to KVM_SEV_LAUNCH_UPDATE_DATA and
>>> KVM_SEV_LAUNCH_UPDATE_VMSA are batched by the kernel. When
>>> KVM_SEV_LAUNCH_MEASURE is called, the kernel forwards all batched
>>> calls in a well-defined order to the firmware. First, it would send
>>> all LAUNCH_UPDATE_DATA commands (in guest address order?). Second, it
>>> would send all LAUNCH_UPDATE_VMSA commands in vCPU # order.
>>
>> Since *ALL* vCPUs have to be measured for an SEV-ES guest, the kernel /
>> hypervisor would just call LAUNCH_UPDATE_VMSA for every vCPU in vCPU#
>> order when required (this is not done for an SEV guest). There would be
>> no need to even have a KVM_SEV_LAUNCH_UPDATE_VMSA.
>
> Are you implying that when SEV-ES support lands, the kernel will
> implicitly call LAUNCH_UPDATE_VMSA during the hypervisor's
> KVM_SEV_LAUNCH_MEASURE call? If so, the aforementioned definition for

That is one possibility. Or KVM_SEV_LAUNCH_UPDATE_VMSA can be an explicit
request to measure all vCPUs. It doesn't have to correspond one to one.
KVM_SEV_LAUNCH_UPDATE_DATA can result in multiple LAUNCH_UPDATE_DATA
firmware commands when the userspace data is not contiguous in memory. The
definition of KVM_SEV_LAUNCH_UPDATE_VMSA can be to measure all
instantiated vCPUs, resulting in multiple LAUNCH_UPDATE_VMSA firmware
commands.

> KVM_SEV_LAUNCH_UPDATE_VMSA should be removed. I support the removal of
> KVM_SEV_LAUNCH_UPDATE_VMSA from userspace altogether.
But, yes, it would make it easier if we implicitly did the
LAUNCH_UPDATE_VMSA for SEV-ES guests.

>
> I don't want the hypervisor to have to think about the ordering. The
> kernel just needs to get it right.
>
> That still leaves us with KVM_SEV_LAUNCH_UPDATE_DATA batching.

Well, the kernel is the hypervisor, too. And there is always a lively
discussion about what should be done in user space vs. what should be done
in the kernel.

Thanks,
Tom
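As a rough illustration of the ordering being proposed, a minimal
userspace-style sketch might look like the following. The struct and helper
names (measured_region, fw_launch_update_data(), fw_launch_update_vmsa())
are hypothetical stand-ins, not the real KVM ioctls or SEV firmware
interface; the only point is that batched data regions get replayed in
guest-address order and VMSAs in vCPU-number order, regardless of the order
in which the hypervisor submitted them.

/*
 * Illustrative sketch only -- not the real KVM/PSP interface.
 * Models the proposed batching: record regions as they arrive,
 * then replay them to "firmware" in a well-defined order at
 * LAUNCH_MEASURE time.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct measured_region {	/* hypothetical batched LAUNCH_UPDATE_DATA */
	uint64_t gpa;		/* guest physical address */
	uint64_t len;
};

/* Stubs standing in for the firmware commands. */
static void fw_launch_update_data(uint64_t gpa, uint64_t len)
{
	printf("LAUNCH_UPDATE_DATA gpa=0x%llx len=0x%llx\n",
	       (unsigned long long)gpa, (unsigned long long)len);
}

static void fw_launch_update_vmsa(unsigned int vcpu_id)
{
	printf("LAUNCH_UPDATE_VMSA vcpu=%u\n", vcpu_id);
}

static int cmp_gpa(const void *a, const void *b)
{
	const struct measured_region *ra = a, *rb = b;

	return (ra->gpa > rb->gpa) - (ra->gpa < rb->gpa);
}

/* Proposed KVM_SEV_LAUNCH_MEASURE behavior: flush the batch in order. */
static void launch_measure(struct measured_region *regions, size_t nr_regions,
			   unsigned int nr_vcpus)
{
	size_t i;
	unsigned int v;

	/* 1. All data regions, in guest-address order. */
	qsort(regions, nr_regions, sizeof(*regions), cmp_gpa);
	for (i = 0; i < nr_regions; i++)
		fw_launch_update_data(regions[i].gpa, regions[i].len);

	/* 2. Every vCPU's state, in vCPU-number order (SEV-ES only). */
	for (v = 0; v < nr_vcpus; v++)
		fw_launch_update_vmsa(v);

	/* 3. Finally, ask the firmware for the measurement. */
	printf("LAUNCH_MEASURE\n");
}

int main(void)
{
	/* Regions submitted out of order by the hypervisor... */
	struct measured_region regions[] = {
		{ .gpa = 0x200000, .len = 0x1000 },
		{ .gpa = 0x100000, .len = 0x4000 },
	};

	/* ...still produce the same, well-defined firmware ordering. */
	launch_measure(regions, 2, 4);
	return 0;
}

Whether that replay happens implicitly inside KVM_SEV_LAUNCH_MEASURE or
behind an explicit "measure all vCPUs" command is exactly the
userspace-vs-kernel split discussed above.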