On Fri, Sep 11, 2020 at 08:51:27AM -0700, Sean Christopherson wrote:
> On Fri, Sep 11, 2020 at 02:43:15PM +0300, Jarkko Sakkinen wrote:
> > On Tue, Sep 08, 2020 at 10:30:33PM -0700, Sean Christopherson wrote:
> > > >  	for (c = 0 ; c < addp.length; c += PAGE_SIZE) {
> > > > -		if (signal_pending(current)) {
> > > > -			ret = -EINTR;
> > > > +		if (c == SGX_MAX_ADD_PAGES_LENGTH || signal_pending(current)) {
> > > > +			ret = c;
> > >
> > > I don't have an opinion on returning count vs. EINTR, but I don't see
> > > the point in arbitrarily capping the number of pages that can be added
> > > in a single ioctl(). It doesn't provide any real protection, e.g.
> > > userspace can simply restart the ioctl() with updated offsets and
> > > continue spamming EADDs. We are relying on other limits, e.g. memcg,
> > > rlimits, etc... to rein in malicious/broken userspace.
> > >
> > > There is nothing inherently dangerous about spending time in the
> > > kernel so long as appropriate checks are made, e.g. for a pending
> > > signal and resched. If we're missing checks, adding an arbitrary
> > > limit won't fix the underlying problem, at least not in a
> > > deterministic way.
> > >
> > > If we really want a limit of some form, adding a knob to control the
> > > max size of an enclave seems like the way to go. But even that is of
> > > dubious value as I'd rather rely on existing limits for virtual and
> > > physical memory, and add a proper EPC cgroup to account and limit
> > > EPC memory.
> >
> > It is better to have a contract in the API that the number of processed
> > pages can be less than given, not unlike in syscalls such as write().
>
> That can be handled by a comment, no? If we want to "enforce" the
> behavior, I'd rather bail out of the loop after a random number of pages
> than have a completely arbitrary limit. The arbitrary limit will create
> a contract of its own and may lead to weird guest implementations.

I don't understand. It is already a random number, given that a signal can
also cause this.

/Jarkko
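
For reference, the write()-style contract discussed above would look
roughly like the sketch below. This is a minimal illustration only,
assuming the loop shape from the quoted diff; add_one_page() stands in
for the real per-page EADD path and is a hypothetical helper:

	for (c = 0; c < addp.length; c += PAGE_SIZE) {
		/* Bail with partial progress instead of -EINTR. */
		if (signal_pending(current)) {
			ret = c;
			break;
		}

		/* Yield to the scheduler on long page runs. */
		cond_resched();

		/* add_one_page() is a hypothetical per-page EADD helper. */
		ret = add_one_page(encl, &addp, c);
		if (ret)
			break;
	}

With this shape, userspace treats a non-negative return like a short
write(): it advances its offsets by the returned byte count and reissues
the ioctl, so no fixed cap such as SGX_MAX_ADD_PAGES_LENGTH is needed to
bound time spent in the loop.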