On 23/04/2020 16:19, Paraschiv, Andra-Irina wrote:
> The memory and CPUs are carved out of the primary VM, they are
> dedicated for the enclave. The Nitro hypervisor running on the host
> ensures memory and CPU isolation between the primary VM and the
> enclave VM.
I hope you are properly taking Hyper-Threading speculative
side-channel vulnerabilities into consideration here.
i.e. cloud providers usually designate each CPU core to run only vCPUs
of a specific guest, to avoid sharing a single CPU core between
multiple guests.
To handle this properly, you need to use some kind of core-scheduling
mechanism (such that each CPU core runs either only vCPUs of the
enclave or only vCPUs of the primary VM at any given point in time).
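For illustration, here is a rough sketch of what this could look like
from the VMM side, using the Linux core-scheduling prctl interface
(PR_SCHED_CORE). That interface is not part of this series, and the
constants are taken from its uapi header (defined below as a fallback),
so treat this as a sketch of the idea rather than a requirement on the
driver:

/*
 * Tag all vCPU threads of one VM with the same core-scheduling cookie,
 * so SMT siblings never run threads of different VMs concurrently.
 */
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/types.h>

#ifndef PR_SCHED_CORE
#define PR_SCHED_CORE			62
#define PR_SCHED_CORE_CREATE		1
#define PR_SCHED_CORE_SHARE_TO		2
#endif
#ifndef PR_SCHED_CORE_SCOPE_THREAD
#define PR_SCHED_CORE_SCOPE_THREAD	0
#endif

int main(void)
{
	pid_t vcpu_tid = 1234;	/* hypothetical vCPU thread id */

	/* Create a new core-scheduling cookie for the calling (VMM) thread. */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
		  PR_SCHED_CORE_SCOPE_THREAD, 0)) {
		perror("PR_SCHED_CORE_CREATE");
		return 1;
	}

	/* Push the same cookie to each vCPU thread of this VM. */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, vcpu_tid,
		  PR_SCHED_CORE_SCOPE_THREAD, 0)) {
		perror("PR_SCHED_CORE_SHARE_TO");
		return 1;
	}

	return 0;
}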
In addition, can you elaborate more on how the enclave memory is carved
out of the primary VM?
Does this involve performing a memory hot-unplug operation from the
primary VM, or just unmapping enclave-assigned guest physical pages
from the primary VM's SLAT (EPT/NPT) and mapping them only in the
enclave's SLAT?
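For context, here is roughly how I understand the carve-out being
driven from the primary VM's userspace, based on the ioctl interface in
this series (device node, ioctl names and struct layout are assumptions
taken from the patches and may differ). The question above is what the
Nitro hypervisor does underneath once such a region is registered:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/nitro_enclaves.h>

int main(void)
{
	__u64 slot_uid = 0;
	struct ne_user_memory_region mem_region = {};
	void *mem;
	int ne_fd, enclave_fd;

	ne_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);
	if (ne_fd < 0) {
		perror("open /dev/nitro_enclaves");
		return 1;
	}

	/* Create an enclave slot; the returned fd controls the enclave VM. */
	enclave_fd = ioctl(ne_fd, NE_CREATE_VM, &slot_uid);
	if (enclave_fd < 0) {
		perror("NE_CREATE_VM");
		return 1;
	}

	/* Hugepage-backed memory allocated inside the primary VM ... */
	mem = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ... is donated to the enclave; what happens to the primary VM's
	 * SLAT mappings for this range at this point? */
	mem_region.flags = NE_DEFAULT_MEMORY_REGION;
	mem_region.memory_size = 2UL << 20;
	mem_region.userspace_addr = (__u64)(uintptr_t)mem;
	if (ioctl(enclave_fd, NE_SET_USER_MEMORY_REGION, &mem_region) < 0) {
		perror("NE_SET_USER_MEMORY_REGION");
		return 1;
	}

	return 0;
}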
Let me know if further clarifications are needed.
I don't quite understand why the Enclave VM needs to be provisioned
and torn down during the primary VM's runtime.
For example, an alternative could have been to just provision both the
primary VM and the Enclave VM at primary VM startup.
Then, wait for the primary VM to set up a communication channel with
the Enclave VM (e.g. via virtio-vsock).
Then, the primary VM is free to request the Enclave VM to perform
various tasks when required, in the isolated environment.
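To make the vsock step concrete, a minimal sketch of the primary VM
side (untested; the enclave CID and port below are made up):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
	/* Hypothetical enclave CID and service port. */
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = 16,
		.svm_port = 9000,
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Once connected, the primary VM can request tasks from the enclave. */
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		close(fd);
		return 1;
	}

	if (write(fd, "do-task", 7) < 0)
		perror("write");

	close(fd);
	return 0;
}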
Such a setup would mimic a common enclave design, such as Microsoft
Windows VBS EPT-based Enclaves (which all run in VTL1). It is also
similar to TEEs running in ARM TrustZone.
i.e. in my proposed alternative, the Enclave VM plays a role similar
to VTL1/TrustZone.
It would also avoid the need to introduce a new PCI device and driver.
-Liran