+ mm-workingset-ignore-slab-memory-size-when-calculating-shadows-pressure.patch added to -mm tree

The patch titled
     Subject: mm: workingset: ignore slab memory size when calculating shadows pressure
has been added to the -mm tree.  Its filename is
     mm-workingset-ignore-slab-memory-size-when-calculating-shadows-pressure.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-workingset-ignore-slab-memory-size-when-calculating-shadows-pressure.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-workingset-ignore-slab-memory-size-when-calculating-shadows-pressure.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Roman Gushchin <guro@xxxxxx>
Subject: mm: workingset: ignore slab memory size when calculating shadows pressure

In the memcg case, count_shadow_nodes() sums the number of pages on the
LRU lists and the amount of slab memory (reclaimable and non-reclaimable)
as a baseline for the allowed number of shadow entries.
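
For reference, here is a condensed sketch of that pre-patch baseline
computation in the memcg branch (identifiers as in mm/workingset.c; the
lruvec lookup is elided and the excerpt is simplified, not literal):

	/* Pre-patch: memcg baseline for the allowed number of shadow nodes */
	for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
		pages += lruvec_page_state_local(lruvec, NR_LRU_BASE + i);
	/* slab memory, reclaimable and unreclaimable -- removed by this patch */
	pages += lruvec_page_state_local(lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
	pages += lruvec_page_state_local(lruvec, NR_SLAB_UNRECLAIMABLE_B) >> PAGE_SHIFT;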

It seems to be a good analogy for the !memcg case, where
node_present_pages() is used.  However, it's not quite true, as there are
two problems:

1) Due to slab reparenting introduced by commit fb2f2b0adb98 ("mm:
   memcg/slab: reparent memcg kmem_caches on cgroup removal"), the local
   per-lruvec slab counters might be inaccurate on non-leaf levels.
   count_shadow_nodes() is the only place where these local slab counters
   are used.

2) Shadow nodes are themselves backed by slab.  So there is a circular
   dependency: the more shadow entries there are, the less pressure the
   kernel applies to reclaim them (see the sketch below).
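
To make problem 2 concrete: the baseline is only used to derive a cap on
the number of shadow nodes, and the shrinker is asked to reclaim just the
excess above that cap.  A condensed sketch of that logic (the exact
scaling shift may differ between kernel versions):

	/* Condensed: how the "pages" baseline bounds shadow node reclaim */
	nodes = list_lru_shrink_count(&shadow_nodes, sc); /* shadow nodes present */
	max_nodes = pages >> (XA_CHUNK_SHIFT - 3);	  /* cap scales with pages */

	if (nodes <= max_nodes)
		return 0;		/* no pressure applied */
	return nodes - max_nodes;	/* excess handed to the shrinker */

	/*
	 * Shadow nodes are slab objects, so counting slab memory into "pages"
	 * means every additional shadow node nudges max_nodes up and weakens
	 * the very pressure that is supposed to reclaim it.
	 */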

Fortunately, there is a simple way to solve both problems: slab counters
shouldn't be taken into account by count_shadow_nodes().

Link: https://lkml.kernel.org/r/20200903230055.1245058-1-guro@xxxxxx
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/workingset.c |    4 ----
 1 file changed, 4 deletions(-)

--- a/mm/workingset.c~mm-workingset-ignore-slab-memory-size-when-calculating-shadows-pressure
+++ a/mm/workingset.c
@@ -495,10 +495,6 @@ static unsigned long count_shadow_nodes(
 		for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 			pages += lruvec_page_state_local(lruvec,
 							 NR_LRU_BASE + i);
-		pages += lruvec_page_state_local(
-			lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
-		pages += lruvec_page_state_local(
-			lruvec, NR_SLAB_UNRECLAIMABLE_B) >> PAGE_SHIFT;
 	} else
 #endif
 		pages = node_present_pages(sc->nid);
_

Patches currently in -mm which might be from guro@xxxxxx are

mm-workingset-ignore-slab-memory-size-when-calculating-shadows-pressure.patch
mm-vmstat-fix-proc-sys-vm-stat_refresh-generating-false-warnings.patch
mm-vmstat-fix-proc-sys-vm-stat_refresh-generating-false-warnings-fix.patch
mm-rework-remote-memcg-charging-api-to-support-nesting.patch
mm-kmem-move-memcg_kmem_bypass-calls-to-get_mem-obj_cgroup_from_current.patch
mm-kmem-remove-redundant-checks-from-get_obj_cgroup_from_current.patch
mm-kmem-prepare-remote-memcg-charging-infra-for-interrupt-contexts.patch
mm-kmem-enable-kernel-memcg-accounting-from-interrupt-contexts.patch



