Increment section->free_cnt in sgx_sanitize_section(), which runs in the
ksgxd kernel thread, instead of setting it to the number of EPC section
pages at initialization. Although sanitization is unlikely to fail, the
pages cannot be allocated right after initialization: they still need to
be reset (EREMOVE'd) by ksgxd, and it's theoretically possible that
ksgxd hasn't finished sanitizing the EPC when userspace starts creating
enclaves.

At the same time, move the taking of section->lock to after EREMOVE so
that EREMOVE is executed without holding the lock.

Reported-by: Jia Zhang <zhang.jia@xxxxxxxxxxxxxxxxx>
Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Signed-off-by: Tianjia Zhang <tianjia.zhang@xxxxxxxxxxxxxxxxx>
---
 arch/x86/kernel/cpu/sgx/main.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c519fc5f6948..34a72a147983 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -41,16 +41,18 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (kthread_should_stop())
 			return;
 
-		/* needed for access to ->page_list: */
-		spin_lock(&section->lock);
-
 		page = list_first_entry(&section->init_laundry_list,
 					struct sgx_epc_page, list);
 
 		ret = __eremove(sgx_get_epc_virt_addr(page));
-		if (!ret)
+
+		/* needed for access to ->page_list: */
+		spin_lock(&section->lock);
+
+		if (!ret) {
 			list_move(&page->list, &section->page_list);
-		else
+			section->free_cnt += 1;
+		} else
 			list_move_tail(&page->list, &dirty);
 
 		spin_unlock(&section->lock);
@@ -646,7 +648,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
 	}
 
-	section->free_cnt = nr_pages;
 	return true;
 }
 
-- 
2.19.1.3.ge56e4f7
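
P.S.: For readers who want the accounting rule spelled out rather than
read off the diff, below is a standalone userspace sketch of what the
patch enforces: a page counts toward free_cnt only once it has been
successfully EREMOVE'd, so an allocator consulting free_cnt can never
hand out an unsanitized page. This is illustrative only, not kernel
code: fake_eremove(), alloc_page() and the struct layouts are
simplified stand-ins, and all locking and list handling is omitted.

/*
 * Userspace model (not kernel code) of the free_cnt accounting rule.
 * free_cnt starts at 0, mirroring the removal of the eager
 * "section->free_cnt = nr_pages" assignment at initialization.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 4

struct epc_page {
	bool sanitized;
};

struct epc_section {
	struct epc_page pages[NR_PAGES];
	unsigned long free_cnt;	/* pages safe to hand out */
};

/* Stand-in for __eremove(): always succeeds in this model. */
static int fake_eremove(struct epc_page *page)
{
	page->sanitized = true;
	return 0;
}

/* Mirrors the patched flow: count a page free only after EREMOVE. */
static void sanitize_section(struct epc_section *section)
{
	for (int i = 0; i < NR_PAGES; i++) {
		if (!fake_eremove(&section->pages[i]))
			section->free_cnt += 1;
	}
}

/* Returns NULL until sanitization has produced a free page. */
static struct epc_page *alloc_page(struct epc_section *section)
{
	if (!section->free_cnt)
		return NULL;
	section->free_cnt--;
	return &section->pages[section->free_cnt];
}

int main(void)
{
	struct epc_section section = { .free_cnt = 0 };

	/* Before sanitization: allocation correctly fails (NULL). */
	printf("pre-sanitize alloc:  %p\n", (void *)alloc_page(&section));

	sanitize_section(&section);

	/* After sanitization: a sanitized page is handed out. */
	printf("post-sanitize alloc: %p\n", (void *)alloc_page(&section));
	return 0;
}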