Re: [PATCH] x86/sgx: Roof the number of pages processed in SGX_IOC_ENCLAVE_ADD_PAGES

On Fri, Sep 11, 2020 at 12:38:59PM -0500, Haitao Huang wrote:
> On Fri, 11 Sep 2020 10:51:27 -0500, Sean Christopherson
> <sean.j.christopherson@xxxxxxxxx> wrote:
> 
> > On Fri, Sep 11, 2020 at 02:43:15PM +0300, Jarkko Sakkinen wrote:
> > > On Tue, Sep 08, 2020 at 10:30:33PM -0700, Sean Christopherson wrote:
> > > > >  	for (c = 0 ; c < addp.length; c += PAGE_SIZE) {
> > > > > -		if (signal_pending(current)) {
> > > > > -			ret = -EINTR;
> > > > > +		if (c == SGX_MAX_ADD_PAGES_LENGTH || signal_pending(current)) {
> > > > > +			ret = c;
> > > >
> > > > I don't have an opinion on returning count vs. EINTR, but I don't see
> > > > the point in arbitrarily capping the number of pages that can be added
> > > > in a single ioctl().  It doesn't provide any real protection, e.g.
> > > > userspace can simply restart the ioctl() with updated offsets and
> > > > continue spamming EADDs.  We are relying on other limits, e.g. memcg,
> > > > rlimits, etc... to rein in malicious/broken userspace.
> > > >
> > > > There is nothing inherently dangerous about spending time in the
> > > > kernel so long as appropriate checks are made, e.g. for a pending
> > > > signal and resched.  If we're missing checks, adding an arbitrary
> > > > limit won't fix the underlying problem, at least not in a
> > > > deterministic way.
> > > >
> > > > If we really want a limit of some form, adding a knob to control the
> > > > max size of an enclave seems like the way to go.  But even that is of
> > > > dubious value as I'd rather rely on existing limits for virtual and
> > > > physical memory, and add a proper EPC cgroup to account and limit EPC
> > > > memory.
> > > 
> > > It is better to have a contract in the API that the number of processed
> > > pages can be less than given, not unlike in syscalls such as write().
> > 
> > That can be handled by a comment, no?  If we want to "enforce" the
> > behavior, I'd rather bail out of the loop after a random number of pages
> > than have a completely arbitrary limit.  The arbitrary limit will create
> > a contract of its own and may lead to weird guest implementations.
> 
> 
> I agree with Sean on the potential issues with an arbitrary hard-coded limit.
> Also, returning -EINTR is a better way to express to user space that the
> operation was interrupted by a signal and can be retried, which is a known
> pattern for this kind of situation.
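
For illustration, the "appropriate checks" mentioned above are the usual
signal_pending()/cond_resched() pair used in long-running kernel loops. A
minimal sketch follows; apart from those helpers, PAGE_SIZE and -EINTR,
every name here is illustrative, not the actual SGX driver code:

/*
 * Sketch only: a long ioctl loop that stays interruptible and
 * preemptible without an arbitrary cap on the number of pages.
 */
static long example_add_pages(struct example_enclave *encl,
			      struct example_add_arg *arg)
{
	unsigned long c;
	long ret = 0;

	for (c = 0; c < arg->length; c += PAGE_SIZE) {
		/* Let a pending signal interrupt the loop. */
		if (signal_pending(current)) {
			ret = -EINTR;
			break;
		}

		/* Yield the CPU if another task needs to run. */
		cond_resched();

		ret = example_add_one_page(encl, arg->src + c,
					   arg->offset + c);
		if (ret)
			break;
	}

	return ret;
}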

In read(), -EINTR is returned only when no data has been processed.

Otherwise, it returns just the count.

/Jarkko
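
To make the read()-like contract concrete, here is a sketch of the same
loop with that return convention (again, only signal_pending(),
cond_resched(), PAGE_SIZE and -EINTR are real kernel symbols; the rest is
illustrative):

static long example_add_pages_partial(struct example_enclave *encl,
				      struct example_add_arg *arg)
{
	unsigned long c;
	long ret = 0;

	for (c = 0; c < arg->length; c += PAGE_SIZE) {
		if (signal_pending(current)) {
			/*
			 * Report partial progress, as read() would;
			 * -EINTR only when nothing was processed yet.
			 */
			ret = c ? c : -EINTR;
			break;
		}

		cond_resched();

		ret = example_add_one_page(encl, arg->src + c,
					   arg->offset + c);
		if (ret)
			break;
	}

	return ret;
}

On a positive return smaller than length, user space would advance its
source pointer and enclave offset by that amount and re-issue the ioctl,
much like handling a short write().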


