Re: [intel-sgx-kernel-dev] [PATCH 08/10] kvm: vmx: add guest's IA32_SGXLEPUBKEYHASHn runtime switch support

On 6/16/2017 4:33 PM, Huang, Kai wrote:


On 6/16/2017 4:11 PM, Andy Lutomirski wrote:
On Thu, Jun 15, 2017 at 8:46 PM, Huang, Kai <kai.huang@xxxxxxxxxxxxxxx> wrote:


On 6/13/2017 11:00 AM, Andy Lutomirski wrote:

On Mon, Jun 12, 2017 at 3:08 PM, Huang, Kai <kai.huang@xxxxxxxxxxxxxxx>
wrote:


I don't know whether the SGX driver will restrict running the provisioning enclave. In my understanding the provisioning enclave always comes from Intel. However, I am not an expert here and may be wrong. Can you point out *exactly* which restrictions on the host must/should be applied to the guest, so that Jarkko can know whether he will support those restrictions or not? Otherwise I don't think we even need to talk about this topic at the current stage.


The whole point is that I don't know.  But here are two types of
restriction I can imagine demand for:

1. Only a particular approved provisioning enclave may run (be it
Intel's or otherwise -- with a non-Intel LE, I think you can launch a
non-Intel provisioning enclave).  This would be done to restrict what
types of remote attestation can be done. (Intel supplies a remote
attestation service that uses some contractual policy that I don't
know.  Maybe a system owner wants a different policy applied to ISVs.)
Imposing this policy on guests more or less requires filtering EINIT.
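
For concreteness, a minimal sketch (not the actual patch) of what "filtering EINIT" could look like on the host side. The helpers read_guest_sigstruct(), sha256_digest() and sgx_einit_for_guest(), and the single-allowed-signer policy, are hypothetical; only the SIGSTRUCT layout facts come from the SDM.

#include <linux/types.h>
#include <linux/string.h>

#define SGX_MODULUS_SIZE 384		/* 3072-bit RSA modulus in SIGSTRUCT */

struct sgx_sigstruct {			/* abbreviated; layout per the SDM */
	u8 header[128];			/* HEADER..RESERVED, offsets 0-127 */
	u8 modulus[SGX_MODULUS_SIZE];	/* signer's public key modulus */
	/* ... exponent, signature, attributes, ENCLAVEHASH (MRENCLAVE) ... */
};

static u8 allowed_signer[32];		/* SHA-256 of the one allowed signing
					 * key, filled in from host policy */

static bool host_signer_allowed(const struct sgx_sigstruct *ss)
{
	u8 mrsigner[32];

	/* MRSIGNER is defined as the SHA-256 of the signer's modulus. */
	if (sha256_digest(ss->modulus, SGX_MODULUS_SIZE, mrsigner))	/* hypothetical */
		return false;

	return !memcmp(mrsigner, allowed_signer, sizeof(mrsigner));
}

/* Called when a guest's ENCLS[EINIT] traps (ENCLS-exiting enabled for EINIT). */
static int handle_encls_einit(struct kvm_vcpu *vcpu)
{
	struct sgx_sigstruct ss;

	/* The SIGSTRUCT address is in guest RBX; copy it in (hypothetical helper). */
	if (read_guest_sigstruct(vcpu, &ss)) {
		kvm_inject_gp(vcpu, 0);
		return 1;
	}

	if (!host_signer_allowed(&ss)) {
		kvm_inject_gp(vcpu, 0);		/* or fail EINIT with an error code */
		return 1;
	}

	/* Policy passed: execute EINIT on the guest's behalf (hypothetical helper). */
	return sgx_einit_for_guest(vcpu, &ss);
}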


Hi Andy,

Sorry for late reply.

What is the issue if the host and guest run provisioning enclaves from different vendors, for example, the host runs Intel's provisioning enclave while the guest runs another vendor's? Or if different guests run provisioning enclaves from different vendors?

There's no issue unless someone has tried to impose a policy.  There
is clearly at least some interest in having policies that affect what
enclaves can run -- otherwise there wouldn't be LEs in the first
place.


One reason I am asking is that on Xen (where we don't have the concept of a *host*), it's likely that we won't apply any policy in the Xen hypervisor at all, and guests will be able to run any enclave from any signer as they wish.

That seems entirely reasonable.  Someone may eventually ask Xen to add
support for SGX enclave restrictions, in which case you'll either have
to tell them that it won't happen or implement it.


Sorry, I don't understand (or have somewhat forgotten) the issues here.


2. For kiosk-ish or single-purpose applications, I can imagine that
you would want to allow a specific list of enclave signers or even
enclave hashes. Maybe you would allow exactly one enclave hash.  You
could kludge this up with a restrictive LE policy, but you could also
do it for real by implementing the specific restriction in the kernel.
Then you'd want to impose it on the guest, and you'd do it by
filtering EINIT.
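
For the kiosk case the check is narrower still: instead of (or in addition to) the signer, compare the enclave measurement carried in the SIGSTRUCT against the one allowed hash. A sketch under the same assumptions as the previous one (the enclavehash field name and allowed_mrenclave are illustrative):

static u8 allowed_mrenclave[32];	/* the single allowed enclave hash,
					 * provisioned by the administrator */

static bool kiosk_policy_allows(const struct sgx_sigstruct *ss)
{
	/*
	 * SIGSTRUCT.ENCLAVEHASH is the measurement the enclave must match
	 * for EINIT to succeed, so checking it here pins one exact enclave,
	 * not merely one signer.  The same check runs for guest EINITs via
	 * the trap handler sketched earlier.
	 */
	return !memcmp(ss->enclavehash, allowed_mrenclave,
		       sizeof(allowed_mrenclave));
}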

Assuming the enclave hash means the measurement of the enclave, and assuming we have a policy that only allows enclaves from one signer to run, would you also elaborate on the issue if the host and guest run enclaves from different signers? If the host has such a policy, and we allow creating guests on such a host, I think that typically we will have the same policy in the guest
Yes, I presume this too, but.

(vetted by the guest's kernel). The owner of that host should be aware of the risk (if there is any) of creating a guest and running enclaves inside it.

No.  The host does not trust the guest in general.  If the host has a

I agree.

policy that the only enclave that shall run is X, that doesn't mean the host rejects every enclave except X when requested through the normal userspace API but magically trusts the user, whenever /dev/kvm is used, not to load a guest that fails to respect the host policy.  It means that the only enclave that shall run is X regardless of which interface is used.  The host must only allow X to be loaded by its userspace, and the host must only allow X to be loaded by a guest.
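
Put in code terms, the point is that whatever check implements the host's policy has to sit on a path that every EINIT goes through, whichever interface it arrived on. A rough sketch (function names hypothetical, copying and error handling elided):

/* One gate for the host's launch policy, whatever that policy is. */
bool sgx_host_policy_allows(const struct sgx_sigstruct *ss);

/* Native path: the SGX driver's enclave-init ioctl from host userspace. */
long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
{
	struct sgx_sigstruct ss;

	/* ... copy the SIGSTRUCT in from userspace ... */
	if (!sgx_host_policy_allows(&ss))
		return -EACCES;
	/* ... proceed with EINIT for the host enclave ... */
	return 0;
}

/* Virtualized path: KVM's handler for a trapped guest ENCLS[EINIT]. */
int handle_guest_einit(struct kvm_vcpu *vcpu)
{
	struct sgx_sigstruct ss;

	/* ... copy the SIGSTRUCT in from guest memory ... */
	if (!sgx_host_policy_allows(&ss))
		return reject_guest_einit(vcpu);	/* hypothetical */
	/* ... execute EINIT on the guest's behalf ... */
	return 1;
}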


This is a theoretical concern. I think your statement makes sense only if we have a specific example proving there is actual risk in allowing a guest to exceed the X approved by the host.

I will dig through your previous emails to see whether you have listed such real cases (I have somewhat forgotten, sorry), but if you don't mind, you could list such cases here.

Hi Andy,

I found an example you listed in a previous email, but it is not related to host policy; rather, it relates to an issue in SGX's key architecture. Quoted below:

"Concretely, imagine I write an enclave that seals my TLS client
certificate's private key and offers an API to sign TLS certificate
requests with it.  This way, if my system is compromised, an attacker
can use the certificate only so long as they have access to my
machine.  If I kick them out or if they merely get the ability to read
the sealed data but not to execute code, the private key should still
be safe.  But, if this system is a VM guest, the attacker could run
the exact same enclave on another guest on the same physical CPU and
sign using my key.  Whoops!"

I think you will have this problem even if you apply the strictest policy at both host and guest -- only allowing one enclave from one signer to run. This is indeed a flaw, but virtualization cannot do anything to solve it -- unless we don't support virtualization at all :)
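
To make the quoted scenario concrete, here is a minimal in-enclave sketch using the SDK's sgx_get_key() (a wrapper around EGETKEY). The seal key it returns is derived from the CPU's fused secrets plus the enclave's identity (MRSIGNER under this policy); nothing about which VM the enclave runs in enters the derivation, so the identical enclave in another guest on the same physical package derives the identical key. Real code would fill cpu_svn/isv_svn from a report; they are zeroed here only for brevity.

#include <string.h>
#include <sgx_key.h>		/* sgx_key_request_t, SGX_KEYSELECT_SEAL, ... */
#include <sgx_utils.h>		/* sgx_get_key() */

sgx_status_t derive_seal_key(sgx_key_128bit_t *key)
{
	sgx_key_request_t req;

	memset(&req, 0, sizeof(req));
	req.key_name   = SGX_KEYSELECT_SEAL;		/* ask for a sealing key */
	req.key_policy = SGX_KEYPOLICY_MRSIGNER;	/* bound to the signer's
							 * identity, not to any VM */

	/* EGETKEY: key = f(CPU secrets, enclave identity, SVNs, key_id). */
	return sgx_get_key(&req, key);
}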

Sorry, I am just trying to find out whether there is a real case that genuinely requires applying the host's policy to the guest, and that would cause a problem if we don't.

Thanks,
-Kai


Thanks,
-Kai


