On Mon, Oct 05, 2020 at 12:11:39PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 05, 2020 at 06:17:59AM +0300, Jarkko Sakkinen wrote:
> > @@ -317,10 +318,31 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> >  	if (current->personality & READ_IMPLIES_EXEC)
> >  		return -EACCES;
> > 
> > -	xas_for_each(&xas, page, idx_end)
> > +	/*
> > +	 * No need to hold encl->lock:
> > +	 * 1. None of the page->* get written.
> > +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> > +	 *    is before calling xa_insert(). After that it is never modified.
> > +	 */
> > +	xas_lock(&xas);
> > +	xas_for_each(&xas, page, idx_end) {
> > +		if (++count % XA_CHECK_SCHED)
> > +			continue;
> 
> This really doesn't do what you think it does.
> 
> 	int ret = 0;
> 	int count = 0;
> 
> 	xas_lock(&xas);
> 	while (xas.index < idx_end) {
> 		struct sgx_encl_page *page = xas_next(&xas);
> 
> 		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
> 			ret = -EACCES;
> 			break;
> 		}
> 
> 		if (++count % XA_CHECK_SCHED)
> 			continue;
> 		xas_pause(&xas);
> 		xas_unlock(&xas);
> 		cond_resched();
> 		xas_lock(&xas);
> 	}
> 	xas_unlock(&xas);
> 
> 	return ret;

No, mine certainly does not do what I think it does: it locks up the
system if the loop succeeds (i.e. does not return -EACCES). :-)
Unfortunately, by mistake I had the v1 patch (xa_load()) in the kernel
that I used to test.

/Jarkko
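
For reference, one way to apply the pause/reschedule pattern sketched
above while keeping the original xas_for_each() loop is shown below.
This is only a minimal sketch, not code from the thread: it assumes the
struct sgx_encl / struct sgx_encl_page definitions from the driver under
review, and the helper name sgx_encl_check_prot, the page_array field
name, and the XA_CHECK_SCHED value are illustrative assumptions.

	#include <linux/errno.h>
	#include <linux/sched.h>	/* cond_resched() */
	#include <linux/xarray.h>

	/* Illustrative batch size; lib/xarray.c uses 4096 internally. */
	#define XA_CHECK_SCHED	4096

	/*
	 * Walk the enclave's page XArray and verify that every page in
	 * [idx_start, idx_end] permits vm_prot_bits, rescheduling every
	 * XA_CHECK_SCHED entries so a large enclave cannot hog the CPU
	 * with the xa_lock held.
	 */
	static int sgx_encl_check_prot(struct sgx_encl *encl,
				       unsigned long idx_start,
				       unsigned long idx_end,
				       unsigned long vm_prot_bits)
	{
		struct sgx_encl_page *page;
		unsigned long count = 0;
		int ret = 0;
		XA_STATE(xas, &encl->page_array, idx_start);	/* field name assumed */

		xas_lock(&xas);
		xas_for_each(&xas, page, idx_end) {
			/* Fail if the mapping wants bits the page can never have. */
			if (~page->vm_max_prot_bits & vm_prot_bits) {
				ret = -EACCES;
				break;
			}

			if (++count % XA_CHECK_SCHED)
				continue;

			/*
			 * xas_pause() records the iterator's position so the
			 * walk resumes correctly after the lock is retaken;
			 * without it, dropping xa_lock mid-walk is unsafe.
			 */
			xas_pause(&xas);
			xas_unlock(&xas);
			cond_resched();
			xas_lock(&xas);
		}
		xas_unlock(&xas);

		return ret;
	}

Note one behavioural difference from the while (xas.index < idx_end)
variant quoted above: xas_for_each() simply skips unpopulated indices,
so no !page check is needed here; whether a hole should instead count as
a failure depends on how the enclave populates page_array.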