Re: [PATCH 10/10] kvm: vmx: handle VMEXIT from SGX Enclave

On 5/11/2017 9:34 PM, Huang, Kai wrote:


On 5/8/2017 8:22 PM, Paolo Bonzini wrote:


On 08/05/2017 07:24, Kai Huang wrote:
@@ -6977,6 +7042,31 @@ static __exit void hardware_unsetup(void)
  */
 static int handle_pause(struct kvm_vcpu *vcpu)
 {
+    /*
+     * SDM 39.6.3 PAUSE Instruction.
+     *
+     * SDM suggests, if VMEXIT caused by 'PAUSE-loop exiting', VMM should
+     * disable 'PAUSE-loop exiting' so PAUSE can be executed in Enclave
+     * again without further PAUSE-looping VMEXIT.
+     *
+     * SDM suggests, if VMEXIT caused by 'PAUSE exiting', VMM should disable
+     * 'PAUSE exiting' so PAUSE can be executed in Enclave again without
+     * further PAUSE VMEXIT.
+     */

How is PLE re-enabled?

Currently it is not re-enabled. Perhaps we could re-enable it on a subsequent VMEXIT, provided that VMEXIT is not a PLE VMEXIT?

Hi Paolo, all,

Sorry for the late reply.

Do you think it is feasible to turn PLE back on at a later VMEXIT? Or at the next VMEXIT that is not from the enclave?

Any suggestions so that I can improve this in the next version of the RFC?



I don't understand the interaction of the internal control registers
(paragraph 41.1.4) with VMX.  How can you migrate the VM between EENTER
and EEXIT?

The current SGX hardware architecture doesn't support live migration, as key parts of the SGX architecture are not migratable. For example, some keys (for sealing and attestation) are persistent and bound to the hardware. Therefore, right now, if SGX is exposed to the guest, live migration is not supported.

We recently had a discussion on this. We figured out that we can support SGX live migration with a workaround: the idea is to discard the source VM's EPC and depend on the destination VM's SGX driver and userspace software stack to handle the *sudden loss of EPC*. However, this introduces some inconsistency with hardware behavior, and it depends on the driver's ability to recover. I'll elaborate on this in the next version of the design and RFC, and we can then discuss whether to support it (along with snapshot support). But we can also have a detailed discussion now if you want to start.

Thanks,
-Kai


In addition, paragraph 41.1.4 does not include the parts of CR_SAVE_FS*
and CR_SAVE_GS* (base, limit, access rights) and does not include
CR_ENCLAVE_ENTRY_IP.

The CPU can exit an enclave via EEXIT, or by an Asynchronous Enclave Exit (AEX); all non-EEXIT enclave exits are referred to as AEX. When an AEX happens, a so-called "synthetic state" is loaded onto the CPU to prevent any software from observing enclave *secrets* in the CPU state. Exactly what goes into the synthetic state is described in SDM 40.3.

So in my understanding, the CPU won't put something like CR_ENCLAVE_ENTRY_IP into RIP. Instead, during an AEX, the Asynchronous Exit Pointer (AEP), which points into normal memory, is pushed onto the stack, and IRET returns to the AEP to continue execution. The AEP typically points to a small piece of code that simply calls ERESUME, so that execution can go back into the enclave.
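For illustration, a typical AEP trampoline is just a few instructions. This is a sketch, not code from any real runtime; the labels are hypothetical. The register convention is the ENCLU one: EAX selects the leaf (ERESUME is leaf 3), RBX holds the TCS address, and RCX holds the AEP.

```asm
; Hypothetical AEP trampoline sketch (NASM-style syntax).
; After an AEX, the kernel's IRET returns here; the untrusted runtime
; simply re-enters the enclave with ERESUME.
aep:
    mov     eax, 3              ; ENCLU leaf: ERESUME
    mov     rbx, [rel tcs_addr] ; TCS of the interrupted enclave thread
    lea     rcx, [rel aep]      ; pass the AEP again, for any future AEX
    enclu                       ; resume the enclave where the AEX occurred
```

ERESUME restores the interrupted enclave context from the thread's State Save Area, which is why no CR_ENCLAVE_ENTRY_IP-like value ever needs to be visible outside the enclave.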

I hope my reply answered your questions.

Thanks,
-Kai


Paolo

+    if (vmx_exit_from_enclave(vcpu)) {
+        u32 exec_ctl, secondary_exec_ctl;
+
+        exec_ctl = vmx_exec_control(to_vmx(vcpu));
+        exec_ctl &= ~CPU_BASED_PAUSE_EXITING;
+        vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, exec_ctl);
+
+        secondary_exec_ctl = vmx_secondary_exec_control(to_vmx(vcpu));
+        secondary_exec_ctl &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;
+        vmcs_set_secondary_exec_control(secondary_exec_ctl);
+
+        return 1;
+    }
+


