Re: [PATCH v9 3/3] x86/sgx: Fine grained SGX MCA behavior for virtualization

On Thu, 2022-10-13 at 08:44 -0700, Dave Hansen wrote:
> On 10/13/22 07:40, Zhiquan Li wrote:
> > > > > What happens if a hypervisor *DOES* fork()?  What's the fallout?
> > > > This part originates from below discussion:
> > > > 
> > > > https://lore.kernel.org/linux-sgx/52dc7f50b68c99cecb9e1c3383d9c6d88734cd67.camel@xxxxxxxxx/#t
> > > > 
> > > > It intents to answer the question:
> > > > 
> > > >     Do you think the processes sharing the same enclave need to be
> > > >     killed, even they had not touched the EPC page with hardware error?
> > > > 
> > > > Dave, do you mean it's not appropriate to be put here?
> > > It's actually a pretty important point, but it's still out of the blue.
> > > 
> > > You also didn't answer my question.
> > Oh, sorry, I focused on answering "Why is this here?" but forgot to
> > answer "What's the fallout?"
> > 
> > It's a very good question.
> > 
> > Looks like Kai had answered it before:
> > 
> > 	For instance, an application can open /dev/sgx_vepc, mmap(), and
> > 	fork().  Then if the child does mmap() against the fd opened by
> > 	the parent, the child will share the same vepc with the parent.
> > 
> > 	...
> > 
> > 	Sharing virtual EPC instance will very likely unexpectedly break
> > 	enclaves in all VMs.
> 
> How, though?  This basically says, "I think things will break."  I want
> to know a few more specifics than that before we merge this.  There are
> ABI implications.

This is because virtual EPC is just a raw resource to the guest, and how the
guest uses virtual EPC to run enclaves is completely controlled by the guest.
When virtual EPC is shared among VMs, it can happen that one guest _thinks_ an
EPC page is still free while in fact it has already been used by another VM as
a valid enclave page.  Also, one VM can just unconditionally sanitize (EREMOVE)
all EPC pages before using any of them (just like Linux does), destroying
another VM's enclave pages.  All of these can cause unexpected SGX errors,
which can lead to failures to create enclaves and/or break existing enclaves
running in all VMs.

> 
> > https://lore.kernel.org/linux-sgx/52dc7f50b68c99cecb9e1c3383d9c6d88734cd67.camel@xxxxxxxxx/#t
> > 
> > Moreover, I checked the code in below scenario while child sharing the
> > virtual EPC instance with parent:
> > - child closes /dev/sgx_vepc earlier than parent.
> > - child re-mmap() /dev/sgx_vepc to the same memory region as parent.
> > - child mmap() /dev/sgx_vepc to different memory region.
> > - child accesses the memory region of mmap() inherited from parent.
> > 
> > It's just that the app messes itself up, the vepc instance is protected
> > very well.
> > Maybe there are other corner cases I've not considered?
> 
> ... and what happens when *THIS* patch is in play?  What if there is a
> machine check in SGX memory?

With this patch, when a #MC happens on one virtual EPC page, it will be sent to
the VM, and the behaviour inside the VM depends on the guest's implementation.
But in any case, based on Tony's reply, the enclave will be marked as "bad" by
hardware and will eventually be killed:

https://lore.kernel.org/linux-sgx/55ffd9475f5d46f68dd06c4323bec871@xxxxxxxxx/
https://lore.kernel.org/linux-sgx/5b6ad3e2af614caf9b41092797ffcd86@xxxxxxxxx/

If the virtual EPC is shared by other VMs, the worst case is that when another
VM uses the bad EPC page (as we cannot take the bad EPC page away from the VM
for now), some SGX error (ENCLS/ENCLU error) or another #MC could happen.  But
this doesn't make things worse: sharing a virtual EPC instance among VMs is
likely to break enclaves in those VMs anyway (as mentioned above).




