On Mon, Oct 05, 2020 at 05:11:19PM +0300, Jarkko Sakkinen wrote:
> @@ -317,11 +319,30 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> 	if (current->personality & READ_IMPLIES_EXEC)
> 		return -EACCES;
>
> -	xas_for_each(&xas, page, idx_end)
> -		if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
> -			return -EACCES;
> +	/*
> +	 * No need to hold encl->lock:
> +	 * 1. None of the page->* get written.
> +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> +	 *    is before calling xa_insert(). After that it is never modified.
> +	 */

You forgot to cover racing with insertion, e.g. below is the snippet from my
original patch[*], which did the lookup without protection from encl->lock.

+	/*
+	 * No need to take encl->lock, vm_prot_bits is set prior to
+	 * insertion and never changes, and racing with adding pages is
+	 * a userspace bug.
+	 */
+	rcu_read_lock();
+	page = radix_tree_lookup(&encl->page_tree, idx);
+	rcu_read_unlock();

[*] https://patchwork.kernel.org/patch/11005431/
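
FWIW, a rough sketch of what I have in mind for the XArray version: do the
walk under rcu_read_lock() so a racing xa_insert() is safe to walk past, and
document that vm_max_prot_bits itself needs no lock. The function wrapper,
the encl->page_array field name and the PFN_DOWN() index math are my guesses
from the quoted hunk, not code from either patch:

	#include <linux/pfn.h>
	#include <linux/rcupdate.h>
	#include <linux/xarray.h>

	/*
	 * Sketch: walk [start, end) and reject the mapping if any present
	 * page asks for more protection than its vm_max_prot_bits allows.
	 *
	 * The RCU read lock keeps the XArray nodes stable against a
	 * concurrent xa_insert(); vm_max_prot_bits is written once in
	 * sgx_encl_page_alloc() before the insert, so reading it without
	 * encl->lock is fine for pages already visible in the XArray.
	 */
	static int sgx_encl_may_map_sketch(struct sgx_encl *encl,
					   unsigned long start,
					   unsigned long end,
					   unsigned long vm_prot_bits)
	{
		unsigned long idx_end = PFN_DOWN(end - 1);
		struct sgx_encl_page *page;
		XA_STATE(xas, &encl->page_array, PFN_DOWN(start));
		int ret = 0;

		rcu_read_lock();
		xas_for_each(&xas, page, idx_end) {
			if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
				ret = -EACCES;
				break;
			}
		}
		rcu_read_unlock();

		return ret;
	}

That still leaves the "racing with adding pages is a userspace bug" part to
be spelled out in the comment, which is the point above.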