On 5/9/22 14:47, Reinette Chatre wrote:
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -190,6 +190,8 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
>  	pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
>  			  backing->pcmd_offset;
>
> +	set_page_dirty(backing->pcmd);
> +	set_page_dirty(backing->contents);
>  	ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);

I think I may have led you astray on this.  It's typical to do:

	lock_page(page)
	set_page_dirty(backing->contents);
	// modify page
	unlock_page(page)

That's safe because laundering the page requires locking it.  But, the
page lock isn't held in this case.  So, it's possible to do:

	get_page(page)
	// page can be truncated, but not reclaimed
	set_page_dirty(page)
	// race: something launders page here
	__ewb(page)
	put_page(page)
	// page is reclaimed, __ewb() contents were thrown away

__shmem_rw() has a similar pattern to what __sgx_encl_ewb() uses.  It
doesn't hold a lock_page(), but it does dirty the page *after* its
memset() writes to the page.

In other words, I think the page dirtying order in the previous (before
this patch) SGX code was correct.  The concept of this patch is
correct, but let's just change the dirtying order, please.

Could you also please add a cc: stable@ and a Link: to either this
message or the original message where I suggested this, like this
(temporary) commit has:

	https://git.kernel.org/pub/scm/linux/kernel/git/daveh/devel.git/commit/?h=testme&id=5ee32d5d3f3cdb943b01992d2ffc5093b139d023
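
Concretely, the reordered hunk would look something like this (just a
sketch of the suggested ordering; the kunmap_atomic() calls and error
handling around __ewb() are elided):

	pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
	pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
			  backing->pcmd_offset;

	ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);

	/* Dirty the backing pages only *after* __ewb() has written to
	 * them, mirroring __shmem_rw().  A page laundered between
	 * set_page_dirty() and the write would otherwise lose the
	 * __ewb() contents. */
	set_page_dirty(backing->pcmd);
	set_page_dirty(backing->contents);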