On Tue Jan 30, 2024 at 4:09 AM EET, Haitao Huang wrote:
> From: Kristen Carlson Accardi <kristen@xxxxxxxxxxxxxxx>
>
> The functions, sgx_{mark,unmark}_page_reclaimable(), manage the tracking
> of reclaimable EPC pages: sgx_mark_page_reclaimable() adds a newly
> allocated page into the global LRU list while
> sgx_unmark_page_reclaimable() does the opposite. Abstract the hard coded
> global LRU references in these functions to make them reusable when
> pages are tracked in per-cgroup LRUs.
>
> Create a helper, sgx_lru_list(), that returns the LRU that tracks a given
> EPC page. It simply returns the global LRU now, and will later return
> the LRU of the cgroup within which the EPC page was allocated. Replace
> the hard coded global LRU with a call to this helper.
>
> Next patches will first get the cgroup reclamation flow ready while
> keeping pages tracked in the global LRU and reclaimed by ksgxd before we
> make the switch in the end for sgx_lru_list() to return per-cgroup
> LRU.
>
> Co-developed-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> Signed-off-by: Kristen Carlson Accardi <kristen@xxxxxxxxxxxxxxx>
> Co-developed-by: Haitao Huang <haitao.huang@xxxxxxxxxxxxxxx>
> Signed-off-by: Haitao Huang <haitao.huang@xxxxxxxxxxxxxxx>
> ---
> V7:
> - Split this out from the big patch, #10 in V6. (Dave, Kai)
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 30 ++++++++++++++++++------------
>  1 file changed, 18 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 912959c7ecc9..a131aa985c95 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -32,6 +32,11 @@ static DEFINE_XARRAY(sgx_epc_address_space);
>   */
>  static struct sgx_epc_lru_list sgx_global_lru;
>
> +static inline struct sgx_epc_lru_list *sgx_lru_list(struct sgx_epc_page *epc_page)
> +{
> +	return &sgx_global_lru;
> +}
> +
>  static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
>
>  /* Nodes with one or more EPC sections. */
> @@ -500,25 +505,24 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
>  }
>
>  /**
> - * sgx_mark_page_reclaimable() - Mark a page as reclaimable
> + * sgx_mark_page_reclaimable() - Mark a page as reclaimable and track it in a LRU.
>   * @page:	EPC page
> - *
> - * Mark a page as reclaimable and add it to the active page list. Pages
> - * are automatically removed from the active list when freed.
>   */
>  void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>  {
> -	spin_lock(&sgx_global_lru.lock);
> +	struct sgx_epc_lru_list *lru = sgx_lru_list(page);
> +
> +	spin_lock(&lru->lock);
>  	page->flags |= SGX_EPC_PAGE_RECLAIMER_TRACKED;
> -	list_add_tail(&page->list, &sgx_global_lru.reclaimable);
> -	spin_unlock(&sgx_global_lru.lock);
> +	list_add_tail(&page->list, &lru->reclaimable);
> +	spin_unlock(&lru->lock);
>  }
>
>  /**
> - * sgx_unmark_page_reclaimable() - Remove a page from the reclaim list
> + * sgx_unmark_page_reclaimable() - Remove a page from its tracking LRU
>   * @page:	EPC page
>   *
> - * Clear the reclaimable flag and remove the page from the active page list.
> + * Clear the reclaimable flag if set and remove the page from its LRU.
>   *
>   * Return:
>   * 0 on success,
> @@ -526,18 +530,20 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>   */
>  int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
>  {
> -	spin_lock(&sgx_global_lru.lock);
> +	struct sgx_epc_lru_list *lru = sgx_lru_list(page);
> +
> +	spin_lock(&lru->lock);
>  	if (page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED) {
>  		/* The page is being reclaimed. */
>  		if (list_empty(&page->list)) {
> -			spin_unlock(&sgx_global_lru.lock);
> +			spin_unlock(&lru->lock);
>  			return -EBUSY;
>  		}
>
>  		list_del(&page->list);
>  		page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
>  	}
> -	spin_unlock(&sgx_global_lru.lock);
> +	spin_unlock(&lru->lock);
>
>  	return 0;
> }

Reviewed-by: Jarkko Sakkinen <jarkko@xxxxxxxxxx>

BR, Jarkko
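For readers following the series: the commit message says sgx_lru_list() will later
return the LRU of the cgroup the EPC page was allocated in, rather than the global
LRU. A minimal sketch of what that later shape could look like is below; the
struct sgx_epc_cgroup type, its lru member, and the epc_page->epc_cg back-pointer
are illustrative assumptions for this sketch, not the actual definitions introduced
by the later patches.

/*
 * Sketch only: assumes a per-cgroup container embedding an sgx_epc_lru_list
 * and a back-pointer stored in the EPC page when it is charged at allocation.
 */
struct sgx_epc_cgroup {
	struct sgx_epc_lru_list lru;	/* per-cgroup list of reclaimable pages */
};

static inline struct sgx_epc_lru_list *sgx_lru_list(struct sgx_epc_page *epc_page)
{
#ifdef CONFIG_CGROUP_MISC
	/*
	 * If the page was charged to an EPC cgroup, track it in that
	 * cgroup's LRU; otherwise fall back to the global LRU.
	 */
	if (epc_page->epc_cg)
		return &epc_page->epc_cg->lru;
#endif
	return &sgx_global_lru;
}

Because sgx_{mark,unmark}_page_reclaimable() already go through this helper after
the patch above, no further changes to those call sites are needed when the
helper's return value switches from the global LRU to a per-cgroup one.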