On Wed May 15, 2024 at 4:12 PM EEST, Dmitrii Kuvaiskii wrote:
> Two enclave threads may try to add and remove the same enclave page
> simultaneously (e.g., if the SGX runtime supports both lazy allocation
> and MADV_DONTNEED semantics). Consider some enclave page added to the
> enclave. User space decides to temporarily remove this page (e.g.,
> emulating the MADV_DONTNEED semantics) on CPU1. At the same time, user
> space performs a memory access on the same page on CPU2, which results
> in a #PF and ultimately in sgx_vma_fault(). Scenario proceeds as
> follows:
>
> /*
>  * CPU1: User space performs
>  * ioctl(SGX_IOC_ENCLAVE_REMOVE_PAGES)
>  * on enclave page X
>  */
> sgx_encl_remove_pages() {
>
>         mutex_lock(&encl->lock);
>
>         entry = sgx_encl_load_page(encl);
>         /*
>          * verify that page is
>          * trimmed and accepted
>          */
>
>         mutex_unlock(&encl->lock);
>
>         /*
>          * remove PTE entry; cannot
>          * be performed under lock
>          */
>         sgx_zap_enclave_ptes(encl);
>
>                 /*
>                  * Fault on CPU2 on same page X
>                  */
>                 sgx_vma_fault() {
>                         /*
>                          * PTE entry was removed, but the
>                          * page is still in enclave's xarray
>                          */
>                         xa_load(&encl->page_array) != NULL ->
>                         /*
>                          * SGX driver thinks that this page
>                          * was swapped out and loads it
>                          */
>                         mutex_lock(&encl->lock);
>                         /*
>                          * this is effectively a no-op
>                          */
>                         entry = sgx_encl_load_page_in_vma();
>                         /*
>                          * add PTE entry
>                          *
>                          * *BUG*: a PTE is installed for a
>                          * page in process of being removed
>                          */
>                         vmf_insert_pfn(...);
>
>                         mutex_unlock(&encl->lock);
>                         return VM_FAULT_NOPAGE;
>                 }
>
>         /*
>          * continue with page removal
>          */
>         mutex_lock(&encl->lock);
>
>         sgx_encl_free_epc_page(epc_page) {
>                 /*
>                  * remove page via EREMOVE
>                  */
>                 /*
>                  * free EPC page
>                  */
>                 sgx_free_epc_page(epc_page);
>         }
>
>         xa_erase(&encl->page_array);
>
>         mutex_unlock(&encl->lock);
> }
>
> Here, CPU1 removed the page. However CPU2 installed the PTE entry on the
> same page. This enclave page becomes perpetually inaccessible (until
> another SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl). This is because the page is
> marked accessible in the PTE entry but is not EAUGed, and any subsequent
> access to this page raises a fault: with the kernel believing there to
> be a valid VMA, the unlikely error code X86_PF_SGX encountered by code
> path do_user_addr_fault() -> access_error() causes the SGX driver's
> sgx_vma_fault() to be skipped and user space receives a SIGSEGV instead.
> The userspace SIGSEGV handler cannot perform EACCEPT because the page
> was not EAUGed. Thus, the user space is stuck with the inaccessible
> page.
>
> Fix this race by forcing the fault handler on CPU2 to back off if the
> page is currently being removed (on CPU1). This is achieved by
> introducing a new flag SGX_ENCL_PAGE_BEING_REMOVED, which is unset by
> default and set only right before the first mutex_unlock() in
> sgx_encl_remove_pages(). Upon loading the page, CPU2 checks whether this
> page is being removed, and if yes then CPU2 backs off and waits until
> the page is completely removed. After that, any memory access to this
> page results in a normal "allocate and EAUG a page on #PF" flow.
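A side note for readers on why returning -EBUSY from __sgx_encl_load_page() is
enough to make CPU2 back off: sgx_vma_fault() already translates -EBUSY into
VM_FAULT_NOPAGE, so no PTE is installed and the faulting access is simply
retried; once CPU1 has finished xa_erase(), the retried fault misses in the
xarray and takes the normal EAUG-on-#PF path. A rough, abbreviated sketch of
that existing fault-handler error path (not part of this patch, details elided
and comments mine):

static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
{
        unsigned long addr = (unsigned long)vmf->address;
        struct vm_area_struct *vma = vmf->vma;
        struct sgx_encl *encl = vma->vm_private_data;
        struct sgx_encl_page *entry;

        /* ... NULL check and the SGX2 EAUG-on-demand shortcut elided ... */

        mutex_lock(&encl->lock);

        entry = sgx_encl_load_page_in_vma(encl, addr, vma->vm_flags);
        if (IS_ERR(entry)) {
                mutex_unlock(&encl->lock);

                /*
                 * -EBUSY: the page is being reclaimed or (with this patch)
                 * removed. Install no PTE; the access is retried later.
                 */
                if (PTR_ERR(entry) == -EBUSY)
                        return VM_FAULT_NOPAGE;

                return VM_FAULT_SIGBUS;
        }

        /* ... vmf_insert_pfn() and the rest of the success path elided ... */
}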
>
> Fixes: 9849bb27152c ("x86/sgx: Support complete page removal")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Dmitrii Kuvaiskii <dmitrii.kuvaiskii@xxxxxxxxx>
> ---
>  arch/x86/kernel/cpu/sgx/encl.c  | 3 ++-
>  arch/x86/kernel/cpu/sgx/encl.h  | 3 +++
>  arch/x86/kernel/cpu/sgx/ioctl.c | 1 +
>  3 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 41f14b1a3025..7ccd8b2fce5f 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -257,7 +257,8 @@ static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,
>
>  	/* Entry successfully located. */
>  	if (entry->epc_page) {
> -		if (entry->desc & SGX_ENCL_PAGE_BEING_RECLAIMED)
> +		if (entry->desc & (SGX_ENCL_PAGE_BEING_RECLAIMED |
> +				   SGX_ENCL_PAGE_BEING_REMOVED))
>  			return ERR_PTR(-EBUSY);
>
>  		return entry;
> diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
> index f94ff14c9486..fff5f2293ae7 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.h
> +++ b/arch/x86/kernel/cpu/sgx/encl.h
> @@ -25,6 +25,9 @@
>  /* 'desc' bit marking that the page is being reclaimed. */
>  #define SGX_ENCL_PAGE_BEING_RECLAIMED	BIT(3)
>
> +/* 'desc' bit marking that the page is being removed. */
> +#define SGX_ENCL_PAGE_BEING_REMOVED	BIT(2)
> +
>  struct sgx_encl_page {
>  	unsigned long desc;
>  	unsigned long vm_max_prot_bits:8;
> diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> index b65ab214bdf5..c542d4dd3e64 100644
> --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> @@ -1142,6 +1142,7 @@ static long sgx_encl_remove_pages(struct sgx_encl *encl,
>  		 * Do not keep encl->lock because of dependency on
>  		 * mmap_lock acquired in sgx_zap_enclave_ptes().
>  		 */
> +		entry->desc |= SGX_ENCL_PAGE_BEING_REMOVED;
>  		mutex_unlock(&encl->lock);
>
>  		sgx_zap_enclave_ptes(encl, addr);

Makes perfect sense:

Reviewed-by: Jarkko Sakkinen <jarkko@xxxxxxxxxx>

BR, Jarkko