The patch titled
     Subject: swap_state: update shadow_nodes for anonymous page
has been added to the -mm mm-unstable branch.  Its filename is
     swap_state-update-shadow_nodes-for-anonymous-page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/swap_state-update-shadow_nodes-for-anonymous-page.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yang Yang <yang.yang29@xxxxxxxxxx>
Subject: swap_state: update shadow_nodes for anonymous page
Date: Fri, 13 Jan 2023 17:36:45 +0800 (CST)

shadow_nodes is used to account and reclaim XArray nodes that hold only
shadow entries of the working set.  It is updated on page cache addition
and deletion.  For a long time workingset only supported the page cache,
and when workingset was extended to do detection for anonymous pages, we
failed to update shadow_nodes for them.  This means that shadow nodes
holding anonymous page entries will never be reclaimed by
scan_shadow_nodes(), even if they consume a lot of memory and the system
is under memory pressure.

Update shadow_nodes when swap cache entries are added or deleted by
calling xas_set_update(..., workingset_update_node).

Link: https://lkml.kernel.org/r/202301131736452546903@xxxxxxxxxx
Fixes: aae466b0052e ("mm/swap: implement workingset detection for anonymous LRU")
Signed-off-by: Yang Yang <yang.yang29@xxxxxxxxxx>
Reviewed-by: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
Cc: Bagas Sanjaya <bagasdotme@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/include/linux/xarray.h~swap_state-update-shadow_nodes-for-anonymous-page
+++ a/include/linux/xarray.h
@@ -1643,7 +1643,8 @@ static inline void xas_set_order(struct
  * @update: Function to call when updating a node.
  *
  * The XArray can notify a caller after it has updated an xa_node.
- * This is advanced functionality and is only needed by the page cache.
+ * This is advanced functionality and is only needed by the page cache
+ * and swap cache.
  */
 static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
 {

--- a/mm/swap_state.c~swap_state-update-shadow_nodes-for-anonymous-page
+++ a/mm/swap_state.c
@@ -94,6 +94,8 @@ int add_to_swap_cache(struct folio *foli
 	unsigned long i, nr = folio_nr_pages(folio);
 	void *old;
 
+	xas_set_update(&xas, workingset_update_node);
+
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
@@ -145,6 +147,8 @@ void __delete_from_swap_cache(struct fol
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE(xas, &address_space->i_pages, idx);
 
+	xas_set_update(&xas, workingset_update_node);
+
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
@@ -252,6 +256,8 @@ void clear_shadow_from_swap_cache(int ty
 	struct address_space *address_space = swap_address_space(entry);
 	XA_STATE(xas, &address_space->i_pages, curr);
 
+	xas_set_update(&xas, workingset_update_node);
+
 	xa_lock_irq(&address_space->i_pages);
 	xas_for_each(&xas, old, end) {
 		if (!xa_is_value(old))
_

Patches currently in -mm which might be from yang.yang29@xxxxxxxxxx are

swap_state-update-shadow_nodes-for-anonymous-page.patch
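
For readers less familiar with the mechanism being fixed: xas_set_update()
registers a callback on an xa_state, and the XArray core then invokes that
callback for every xa_node the operation creates or modifies.
workingset_update_node() uses those notifications to add nodes that hold
only shadow entries to the shadow_nodes list_lru, so that the shadow
shrinker (scan_shadow_nodes()) can find and reclaim them later.  The
sketch below condenses that pattern into a single helper.  It is only a
minimal illustration, not kernel code: the helper store_shadow_tracked()
is hypothetical, while XA_STATE(), xas_set_update(), xas_store(),
xa_lock_irq()/xa_unlock_irq() and workingset_update_node() are the real
interfaces used by the patch above.

/*
 * Illustrative sketch only.  store_shadow_tracked() is a hypothetical
 * helper showing the pattern the patch applies in add_to_swap_cache(),
 * __delete_from_swap_cache() and clear_shadow_from_swap_cache().
 */
#include <linux/fs.h>
#include <linux/swap.h>
#include <linux/xarray.h>

static void store_shadow_tracked(struct address_space *mapping,
				 pgoff_t index, void *shadow)
{
	XA_STATE(xas, &mapping->i_pages, index);

	/*
	 * Register the workingset callback before touching the tree.
	 * From here on, every xa_node this operation creates or
	 * modifies is passed to workingset_update_node(), which keeps
	 * the shadow_nodes accounting in sync.  Without this call (the
	 * bug being fixed), nodes holding only swap cache shadow
	 * entries stayed invisible to the shadow-node shrinker.
	 */
	xas_set_update(&xas, workingset_update_node);

	xa_lock_irq(&mapping->i_pages);
	/*
	 * shadow must be an xa_is_value() entry (or NULL to erase).
	 * The xas_nomem() retry loop real callers use when a store may
	 * need to allocate nodes is omitted here for brevity.
	 */
	xas_store(&xas, shadow);
	xa_unlock_irq(&mapping->i_pages);
}

The patch itself does not add such a helper; it simply inserts the
xas_set_update() call at the top of each swap cache path that already has
an xa_state, mirroring what the page cache has done all along in
mm/filemap.c.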