Re: [PATCH v3] x86/sgx: Fix sgx_encl_may_map locking

On Mon, Oct 05, 2020 at 08:55:19AM -0700, Sean Christopherson wrote:
> On Mon, Oct 05, 2020 at 05:11:19PM +0300, Jarkko Sakkinen wrote:
> > @@ -317,11 +319,30 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> >  	if (current->personality & READ_IMPLIES_EXEC)
> >  		return -EACCES;
> >  
> > -	xas_for_each(&xas, page, idx_end)
> > -		if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
> > -			return -EACCES;
> > +	/*
> > +	 * No need to hold encl->lock:
> > +	 * 1. None of the page->* get written.
> > +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> > +	 *    is before calling xa_insert(). After that it is never modified.
> > +	 */
> 
> You forgot to cover racing with insertion, e.g. below is the snippet from my
> original patch[*], which did the lookup without protection from encl->lock.
> 
> +		/*
> +		 * No need to take encl->lock, vm_prot_bits is set prior to
> +		 * insertion and never changes, and racing with adding pages is
> +		 * a userspace bug.
> +		 */
> +		rcu_read_lock();
> +		page = radix_tree_lookup(&encl->page_tree, idx);
> +		rcu_read_unlock();
> 
> 
> [*] https://patchwork.kernel.org/patch/11005431/

I'm not sure why it was merged the way it was, but it probably was not
because of that snippet. The lookup was covered by encl->lock before, so
for all practical purposes it was protected then. Had I received a patch
containing only that change, I would have replaced encl->lock with it,
i.e. that particular snippet got lost in the noise. That's why this was
not broken before v36.

/Jarkko
