On 5/24/19 12:15 PM, Hillf Danton wrote:
On Thu, 23 May 2019 10:27:37 +0800 Yang Shi wrote:
Commit 9092c71bb724 ("mm: use sc->priority for slab shrink targets")
broke the relationship between sc->nr_scanned and slab pressure:
sc->nr_scanned can't double the slab pressure anymore. So it makes no
sense to keep incrementing sc->nr_scanned for this purpose. In fact,
doing so may reduce pressure on the slab shrinkers, since an inflated
sc->nr_scanned prevents sc->priority from being raised.
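For context, that commit changed how do_shrink_slab() computes its scan
target; a rough before-and-after sketch, simplified from the upstream
code (variable setup and clamping elided):

	/* before: the scan target scaled with sc->nr_scanned */
	delta = (4 * nr_scanned) / shrinker->seeks;
	delta *= freeable;
	do_div(delta, nr_eligible + 1);

	/* after: the scan target scales with sc->priority only */
	delta = freeable >> priority;
	delta *= 4;
	do_div(delta, shrinker->seeks);

With the "after" form, incrementing sc->nr_scanned no longer feeds into
slab pressure at all.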
The deleted code below is meant to get more slab pages shrunk, and it
can still do that without first raising the scan priority, even after
commit 9092c71bb724. Otherwise we may face the risk that the priority
rises much faster than intended, per the snippet quoted further below.
For kswapd, the priority is raised if kswapd_shrink_node() returns false
(direct reclaim keeps raising the priority until sc->nr_reclaimed >=
sc->nr_to_reclaim). kswapd_shrink_node() ends with "return
sc->nr_scanned >= sc->nr_to_reclaim". So the old "double pressure" no
longer works as designed: the extra increments prevent
"sc->nr_scanned < sc->nr_to_reclaim" from ever becoming true, which
keeps the priority from being raised.
And patch 2/2 would not make the priority go up too quickly, since one
THP is accounted as 512 base pages.
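For illustration, the core of that 2/2 change in shrink_page_list() is
roughly the following (a sketch based on the series description, not
the exact hunk):

	/* account a THP as all of its base pages, e.g. 512 for 2M THP */
	nr_pages = 1 << compound_order(page);
	sc->nr_scanned += nr_pages;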
	/*
	 * If we're getting trouble reclaiming, start doing
	 * writepage even in laptop mode.
	 */
	if (sc->priority < DEF_PRIORITY - 2)
		sc->may_writepage = 1;
The bonnie test doesn't show that this change alters the behavior of
the slab shrinkers:
                        w/               w/o
                    /sec    %CP      /sec    %CP
Sequential delete:  3960.6  94.6     3997.6  96.2
Random delete:      2518    63.8     2561.6  64.6
The slightly higher "/sec" without the patch is likely due to the
slightly higher CPU usage.
Cc: Josef Bacik <josef@xxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
---
v4: Added Johannes's ack
mm/vmscan.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7acd0af..b65bc50 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1137,11 +1137,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
 
-		/* Double the slab pressure for mapped and swapcache pages */
-		if ((page_mapped(page) || PageSwapCache(page)) &&
-		    !(PageAnon(page) && !PageSwapBacked(page)))
-			sc->nr_scanned++;
-
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
 
--
1.8.3.1
Best Regards
Hillf