I've noticed that the "slab" value in memory.stat is sometimes 0, even if
some children memory cgroups have a non-zero "slab" value. The following
investigation showed that this is the result of the kmem_cache reparenting
in combination with the per-cpu batching of slab vmstats.

At offlining, some vmstat values may remain in the percpu cache without
being propagated up the cgroup hierarchy, which means that stats on
ancestor levels are lower than the actual values. Later, when the slab
pages are released, the precise number of pages is subtracted on the
parent level, making the value negative. We don't show negative values;
0 is printed instead.

To fix this issue, let's flush the percpu slab memcg and lruvec stats on
memcg offlining. This guarantees that the numbers on all ancestor levels
are accurate and match the actual number of outstanding slab pages.

Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
---
 mm/memcontrol.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e821f34399f..3a5f6f486cdf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3412,6 +3412,50 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	return 0;
 }
 
+static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
+{
+	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
+	struct mem_cgroup_per_node *pi;
+	unsigned long recl = 0, unrecl = 0;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		recl += per_cpu(
+			pn->lruvec_stat_cpu->count[NR_SLAB_RECLAIMABLE], cpu);
+		unrecl += per_cpu(
+			pn->lruvec_stat_cpu->count[NR_SLAB_UNRECLAIMABLE], cpu);
+	}
+
+	for (pi = pn; pi; pi = parent_nodeinfo(pi, node)) {
+		atomic_long_add(recl,
+				&pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
+		atomic_long_add(unrecl,
+				&pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
+	}
+}
+
+static void memcg_flush_slab_vmstats(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup *mi;
+	unsigned long recl = 0, unrecl = 0;
+	int node, cpu;
+
+	for_each_possible_cpu(cpu) {
+		recl += per_cpu(
+			memcg->vmstats_percpu->stat[NR_SLAB_RECLAIMABLE], cpu);
+		unrecl += per_cpu(
+			memcg->vmstats_percpu->stat[NR_SLAB_UNRECLAIMABLE], cpu);
+	}
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
+		atomic_long_add(recl, &mi->vmstats[NR_SLAB_RECLAIMABLE]);
+		atomic_long_add(unrecl, &mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
+	}
+
+	for_each_node(node)
+		memcg_flush_slab_node_stats(memcg, node);
+}
+
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
 	struct cgroup_subsys_state *css;
@@ -3432,7 +3476,14 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	/*
+	 * Deactivate and reparent kmem_caches. Then flush the percpu
+	 * slab statistics to have precise values at the parent and
+	 * all ancestor levels. It's required to keep slab stats
+	 * accurate after the reparenting of kmem_caches.
+	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_flush_slab_vmstats(memcg);
 
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
-- 
2.21.0
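
To make the failure mode concrete, here is a minimal userspace model
(illustration only, not kernel code; the names parent_slab, stock and
THRESHOLD are invented for this sketch) of how an unflushed per-cpu
batch leaves the ancestor counter short, so that the exact subtraction
at slab release drives it negative and memory.stat clamps it to 0:

#include <stdio.h>

#define THRESHOLD 32		/* stand-in for the per-cpu batching threshold */

static long parent_slab;	/* stand-in for the parent's atomic slab counter */
static long stock;		/* stand-in for one cpu's unflushed percpu delta */

/* charge slab pages on behalf of a child cgroup, batching small deltas */
static void mod_slab_state(long pages)
{
	stock += pages;
	if (stock > THRESHOLD || stock < -THRESHOLD) {
		parent_slab += stock;	/* propagate to the ancestor level */
		stock = 0;
	}
}

/* what memory.stat does: a negative counter is shown as 0 */
static unsigned long memory_stat_slab(void)
{
	return parent_slab > 0 ? (unsigned long)parent_slab : 0;
}

int main(void)
{
	int i;

	/* a child cgroup charges 100 slab pages, one at a time */
	for (i = 0; i < 100; i++)
		mod_slab_state(1);

	/* the child is offlined here; without a flush, "stock" is lost */
	printf("parent after offlining: %ld (unflushed stock: %ld)\n",
	       parent_slab, stock);
	stock = 0;

	/* the reparented slab pages are later freed with exact accounting */
	parent_slab -= 100;

	printf("parent after release: %ld, memory.stat would show: %lu\n",
	       parent_slab, memory_stat_slab());
	return 0;
}

With these numbers the model ends up with a parent value of 99 before the
release and -1 afterwards, which is reported as 0, matching the symptom
described above.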