[failures] swap_state-update-shadow_nodes-for-anonymous-page.patch removed from -mm tree

The quilt patch titled
     Subject: swap_state: update shadow_nodes for anonymous page
has been removed from the -mm tree.  Its filename was
     swap_state-update-shadow_nodes-for-anonymous-page.patch

This patch was dropped because it had testing failures

------------------------------------------------------
From: Yang Yang <yang.yang29@xxxxxxxxxx>
Subject: swap_state: update shadow_nodes for anonymous page
Date: Fri, 13 Jan 2023 17:36:45 +0800 (CST)

Shadow_nodes is used for reclaiming working set shadow nodes.  It is
updated on page cache addition and deletion.  For a long time workingset
only supported the page cache.  But when workingset was extended to
support anonymous page detection, we failed to update shadow_nodes for
it.  This means that shadow nodes holding anonymous page entries are
never reclaimed by scan_shadow_nodes(), even when they use a lot of
memory and system memory is tight.
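
For context, a simplified paraphrase of workingset_update_node() (from
mm/workingset.c; the lockdep assertion and the WORKINGSET_NODES
statistics accounting are omitted here) shows why the callback has to be
hooked up: only nodes the callback sees can end up on the shadow_nodes
list_lru that scan_shadow_nodes() reclaims from.

	/* Simplified paraphrase of mm/workingset.c::workingset_update_node() */
	void workingset_update_node(struct xa_node *node)
	{
		/*
		 * Track the node on shadow_nodes while it contains nothing
		 * but shadow (value) entries, so scan_shadow_nodes() can
		 * reclaim it under memory pressure; untrack it otherwise.
		 */
		if (node->count && node->count == node->nr_values) {
			if (list_empty(&node->private_list))
				list_lru_add(&shadow_nodes, &node->private_list);
		} else {
			if (!list_empty(&node->private_list))
				list_lru_del(&shadow_nodes, &node->private_list);
		}
	}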

So update shadow_nodes for anonymous pages when entries are added to or
deleted from the swap cache, by calling xas_set_update(&xas,
workingset_update_node).
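
Concretely, the pattern applied by the hunks below is the following
sketch (illustrative only; swap_cache_example() is a hypothetical name,
the real callers are add_to_swap_cache(), __delete_from_swap_cache() and
clear_shadow_from_swap_cache()):

	static void swap_cache_example(swp_entry_t entry)
	{
		struct address_space *address_space = swap_address_space(entry);
		XA_STATE(xas, &address_space->i_pages, swp_offset(entry));

		/*
		 * Register the callback before any xas_*() operation so
		 * that every xa_node insertion and removal is reported to
		 * the workingset code.
		 */
		xas_set_update(&xas, workingset_update_node);

		/* ... xa_lock_irq(), xas_store(), etc., as in the hunks below ... */
	}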

Link: https://lkml.kernel.org/r/202301131736452546903@xxxxxxxxxx
Fixes: aae466b0052e ("mm/swap: implement workingset detection for anonymous LRU")
Signed-off-by: Yang Yang <yang.yang29@xxxxxxxxxx>
Reviewed-by: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
Cc: Bagas Sanjaya <bagasdotme@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---


--- a/include/linux/xarray.h~swap_state-update-shadow_nodes-for-anonymous-page
+++ a/include/linux/xarray.h
@@ -1643,7 +1643,8 @@ static inline void xas_set_order(struct
  * @update: Function to call when updating a node.
  *
  * The XArray can notify a caller after it has updated an xa_node.
- * This is advanced functionality and is only needed by the page cache.
+ * This is advanced functionality and is only needed by the page cache
+ * and swap cache.
  */
 static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
 {
--- a/mm/swap_state.c~swap_state-update-shadow_nodes-for-anonymous-page
+++ a/mm/swap_state.c
@@ -94,6 +94,8 @@ int add_to_swap_cache(struct folio *foli
 	unsigned long i, nr = folio_nr_pages(folio);
 	void *old;
 
+	xas_set_update(&xas, workingset_update_node);
+
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
@@ -145,6 +147,8 @@ void __delete_from_swap_cache(struct fol
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE(xas, &address_space->i_pages, idx);
 
+	xas_set_update(&xas, workingset_update_node);
+
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
@@ -252,6 +256,8 @@ void clear_shadow_from_swap_cache(int ty
 		struct address_space *address_space = swap_address_space(entry);
 		XA_STATE(xas, &address_space->i_pages, curr);
 
+		xas_set_update(&xas, workingset_update_node);
+
 		xa_lock_irq(&address_space->i_pages);
 		xas_for_each(&xas, old, end) {
 			if (!xa_is_value(old))
_

Patches currently in -mm which might be from yang.yang29@xxxxxxxxxx are




