On Mon, Mar 07, 2022 at 03:16:57PM +0200, Jarkko Sakkinen wrote:
> On Sun, Mar 06, 2022 at 10:43:31PM +0000, Matthew Wilcox wrote:
> > On Sun, Mar 06, 2022 at 07:02:57PM +0200, Jarkko Sakkinen wrote:
> > > So can I conclude from this that in general having populate available
> > > for device memory is something horrid, or just the implementation path?
> >
> > You haven't even attempted to explain what the problem is you're trying
> > to solve. You've shown up with some terrible code and said "Hey, is
> > this a good idea". No, no, it's not.
>
> The problem is that in order to include memory in an enclave, which is
> essentially a reserved address range in the process's virtual address
> space, there are two steps to it:
>
> 1. The host side (kernel) does ENCLS[EAUG] to request a new page to be
>    added to the enclave.
> 2. The enclave accepts the request with ENCLU[EACCEPT] or
>    ENCLU[EACCEPTCOPY].
>
> In the current SGX2 patch set this is taken care of by the page fault
> handler. I.e. the enclave calls ENCLU[EACCEPT] for an empty address
> and the #PF handler then does EAUG for a single page.
>
> So if you want to process a batch of pages, this generates O(n)
> round trips.
>
> If there was a way to pre-do a batch of EAUG's, that would allow
> loading data into the enclave without page faults happening
> constantly.
>
> One solution for this is to simply add an ioctl:
>
> https://lore.kernel.org/linux-sgx/YiLRBglTEbu8cHP9@xxxxxx/T/#m195ec84bf85614a140abeee245c5118c22ace8f3
>
> But in practice, when you wanted to use it, you would set up the
> parameters so that they match the mmap() range. So for a practical
> user space API, having mmap() take care of this would be a much
> leaner option.

For something like Graphene [1] the lazy #PF based option is probably
the way to go. For the wasm runtime that we're doing in Enarx [2] we
get better performance with something like this, i.e. most of the time
we take only as much as we use.

[1] https://github.com/gramineproject/graphene
[2] https://enarx.dev/

BR, Jarkko
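To make the two-step flow above concrete, here is a rough user space
sketch of what a batched path could look like. The ioctl name, number
and argument struct are placeholders (not the uapi proposed in the
linked patch), and enclave_accept_page() has to run inside the enclave;
the point is only to show one EAUG request covering a whole range,
followed by per-page ENCLU[EACCEPT], instead of one #PF per page.

/*
 * Rough sketch only: the ioctl number and argument struct below are
 * placeholders, not the uapi from the referenced patch.
 */
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical uapi: EAUG a whole range up front in one call. */
struct sgx_enclave_aug_pages {
	uint64_t offset;	/* page-aligned offset into the enclave range */
	uint64_t length;	/* page-aligned length of the range to add */
};
#define SGX_IOC_ENCLAVE_AUG_PAGES \
	_IOW('s', 0x0f, struct sgx_enclave_aug_pages)

/* Host side: one round trip instead of one #PF per page. */
static int enclave_add_range(int enclave_fd, uint64_t offset,
			     uint64_t length)
{
	struct sgx_enclave_aug_pages args = {
		.offset = offset,
		.length = length,
	};

	return ioctl(enclave_fd, SGX_IOC_ENCLAVE_AUG_PAGES, &args);
}

/*
 * Enclave side: each pre-added page still has to be accepted with
 * ENCLU[EACCEPT] before it can be used. SECINFO must be 64 bytes and
 * 64-byte aligned; PENDING + PT_REG + RW matches a freshly EAUG'd page.
 */
#define ENCLU_EACCEPT		5
#define SGX_SECINFO_R		(1ULL << 0)
#define SGX_SECINFO_W		(1ULL << 1)
#define SGX_SECINFO_PENDING	(1ULL << 3)
#define SGX_SECINFO_REG		(1ULL << 8)	/* page type PT_REG */

struct sgx_secinfo {
	uint64_t flags;
	uint8_t reserved[56];
} __attribute__((aligned(64)));

static int enclave_accept_page(void *page)
{
	struct sgx_secinfo secinfo = {
		.flags = SGX_SECINFO_R | SGX_SECINFO_W |
			 SGX_SECINFO_PENDING | SGX_SECINFO_REG,
	};
	int ret;

	/* EAX = leaf, RBX = SECINFO, RCX = linear address of the page. */
	asm volatile("enclu"
		     : "=a" (ret)
		     : "a" (ENCLU_EACCEPT), "b" (&secinfo), "c" (page)
		     : "memory");
	return ret;
}

Whether the range is described by an ioctl like the placeholder above
or implied by the mmap() range itself, the host side collapses to a
single request and the enclave just walks the range calling EACCEPT.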