On 5/4/22 15:13, Reinette Chatre wrote:
> +	/*
> +	 * VA page slot ID uses same bit as the flag so it is important
> +	 * to ensure that the page is not already in backing store.
> +	 */
> +	if (entry->epc_page &&
> +	    (entry->desc & SGX_ENCL_PAGE_BEING_RECLAIMED)) {
> +		pr_debug("%s:%d encl %px addr 0x%lx desc 0x%lx (index %lu) being reclaimed\n",
> +			 __func__, __LINE__, encl, addr, entry->desc,
> +			 PFN_DOWN(entry->desc - entry->encl->base));
> +		reclaimed = 1;
> +		break;
> +	}

I don't think this works as-is. As it stands,
SGX_ENCL_PAGE_BEING_RECLAIMED is cleared *BEFORE* the EWB. It's
possible for the check above to race with SGX_ENCL_PAGE_BEING_RECLAIMED
being cleared. That would erroneously lead pcmd_page_in_use() to return
false *just* before data is written to the PCMD page.

static void sgx_encl_ewb()
{
	encl_page->desc &= ~SGX_ENCL_PAGE_BEING_RECLAIMED;
	...
	__sgx_encl_ewb()
}

You might be able to move the clearing of SGX_ENCL_PAGE_BEING_RECLAIMED
until after the __sgx_encl_ewb() (see the sketch at the end of this
mail). But, at that point, pcmd_page_empty is a bit useless because it
is stale. I guess it might act as a performance optimization because it
allows avoiding the expensive pcmd_page_in_use() check.

Also, one nit:

> +	/*
> +	 * Address of enclave page using the first entry within the
> +	 * PCMD page.
> +	 */
> +	pcmd_first_page = PFN_PHYS(page_index & ~0x1FUL) + encl->base;

That ~0x1FUL is pretty magic. I assume that's 5 bits because there are
32 PCMDs per page. This could probably be something resembling this
instead:

#define PCMDS_PER_PAGE	(PAGE_SIZE / sizeof(struct sgx_pcmd))
#define PCMD_FIRST_MASK	GENMASK(ilog2(PCMDS_PER_PAGE) - 1, 0)
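
The assignment would then be self-documenting. As a rough sketch,
assuming the low ilog2(PCMDS_PER_PAGE) bits of page_index select an
entry within a PCMD page (which is what the ~0x1FUL in the patch
implies):

	/*
	 * Address of the enclave page backed by the first entry in
	 * this PCMD page. Masking off the low bits of page_index
	 * (0x1F for 32 PCMDs per page) selects that first entry.
	 */
	pcmd_first_page = PFN_PHYS(page_index & ~PCMD_FIRST_MASK) +
			  encl->base;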
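
And, to make the reordering suggestion above concrete, a minimal
sketch in the same pseudocode style (the stale pcmd_page_empty caveat
still applies):

static void sgx_encl_ewb()
{
	...
	__sgx_encl_ewb()
	/*
	 * Clear the flag only once the PCMD data is actually in the
	 * backing store, so pcmd_page_in_use() cannot observe a
	 * cleared flag while the EWB is still in flight.
	 */
	encl_page->desc &= ~SGX_ENCL_PAGE_BEING_RECLAIMED;
}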