On Thu, Nov 15, 2018 at 10:00:02PM +0200, Jarkko Sakkinen wrote:
> On Thu, Nov 08, 2018 at 04:46:03PM +0200, Jarkko Sakkinen wrote:
> > On Wed, Nov 07, 2018 at 10:00:57AM -0800, Sean Christopherson wrote:
> > > What do we gain by a single buffer vs. separate buffers? The
> > > ioctl() would be slightly smaller, but it seems like the actual
> > > code would be more complex.
> >
> > I'm fine with either. It was just a suggestion.
> >
> > > The enclave build process also utilizes the backing as temp
> > > storage to avoid having to alloc kernel memory when queueing
> > > pages to be added by the worker thread (which reminds me that I
> > > wanted to document why a worker thread is used). Keeping this
> > > behavior would effectively make providing backing mandatory.
> >
> > Would it be a problem to just allocate those pages with
> > alloc_page() and free them in the worker thread?
> >
> > > Are there any potential complications with ENCLS consuming
> > > userspace pointers? We'd have to wrap them with
> > > user_access_{begin,end}() and probably tweak the fixup, but I
> > > assume having the fixup handler means we're generally ok?
> >
> > Last time I did it I used get_user_pages() for pinning. I'm not
> > sure why I should do anything but just re-use that.
>
> What about VA page swapping? I'm not saying it'd have to be done
> right now, but we need to answer whether it is enclave-local or a
> global asset. If it is local, it would also require an argument.
>
> I most likely won't fix this for v17 because this detail needs
> careful consideration.

I wonder if you can map a shmem file into the process address space so
that it gets accounted to the process? That would be optimal for us.
This way it won't become an API issue.

Yeah, as I started to implement this I realized the issues that will
arise on the API side. Even doing vm_mmap() in the kernel code would
be better than taking the addresses through the ioctl. That is another
option.

/Jarkko
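
A few rough sketches to make the alternatives above concrete. These
are sketches only, written against ~4.20-era kernel APIs, and every
name that is not an existing kernel symbol is made up. First, the
alloc_page() staging for the add-page worker: copy the source page
into a kernel page at ioctl time and let the worker free it after
EADD, so staging no longer depends on the backing (sgx_add_page_req
and sgx_queue_add_page are hypothetical names):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

/* Hypothetical request type; the driver would have its own. */
struct sgx_add_page_req {
	struct list_head list;
	struct page *page;	/* temp copy of the source page */
};

/* Called from the ioctl: stage the user's data in a kernel page. */
static int sgx_queue_add_page(void __user *src, struct list_head *queue)
{
	struct sgx_add_page_req *req;

	req = kzalloc(sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	req->page = alloc_page(GFP_KERNEL);
	if (!req->page) {
		kfree(req);
		return -ENOMEM;
	}

	if (copy_from_user(page_address(req->page), src, PAGE_SIZE)) {
		__free_page(req->page);
		kfree(req);
		return -EFAULT;
	}

	list_add_tail(&req->list, queue);
	return 0;
}

The worker would then EADD from page_address(req->page) and
__free_page() it.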
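
Second, the get_user_pages() pinning for ENCLS operands, which avoids
user_access_begin()/end() and fixup changes because the leaf only ever
sees a kernel mapping. Again a sketch; sgx_pin_user_page() is a
hypothetical helper:

#include <linux/mm.h>
#include <linux/sched.h>

/* Pin one user page so the ENCLS leaf consumes a kernel mapping
 * instead of a raw user pointer. */
static int sgx_pin_user_page(unsigned long addr, struct page **page)
{
	long ret;

	/* get_user_pages() wants mmap_sem held for read. */
	down_read(&current->mm->mmap_sem);
	ret = get_user_pages(addr & PAGE_MASK, 1, 0, page, NULL);
	up_read(&current->mm->mmap_sem);

	if (ret < 0)
		return ret;
	return ret == 1 ? 0 : -EFAULT;
}

The caller would kmap() the pinned page around the leaf and put_page()
it afterwards.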
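
And the vm_mmap() option: the driver creates the shmem backing itself
and maps it into the calling process, so the memory is accounted to
the process and no address has to travel through the ioctl at all.
Sketch only; the encl argument and its backing/backing_addr fields are
illustrative, not the actual driver structures:

#include <linux/err.h>
#include <linux/file.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/shmem_fs.h>

static int sgx_encl_map_backing(struct sgx_encl *encl, unsigned long size)
{
	struct file *backing;
	unsigned long addr;

	/* Driver-owned shmem file instead of caller-provided backing. */
	backing = shmem_file_setup("dev/sgx", size, VM_NORESERVE);
	if (IS_ERR(backing))
		return PTR_ERR(backing);

	/* Map it into current's address space; vm_mmap() returns the
	 * address or a negative errno encoded as an unsigned long. */
	addr = vm_mmap(backing, 0, size, PROT_READ | PROT_WRITE,
		       MAP_SHARED, 0);
	if (IS_ERR_VALUE(addr)) {
		fput(backing);
		return (int)addr;
	}

	encl->backing = backing;
	encl->backing_addr = addr;
	return 0;
}

This keeps the shmem pages charged to the process and removes the user
pointers from the ioctl ABI, which is why it would not become an API
issue.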