On Thu, Oct 29, 2020 at 8:14 AM Roman Gushchin <guro@xxxxxx> wrote:
>
> On Wed, Oct 28, 2020 at 10:56:20AM +0800, Muchun Song wrote:
> > On Wed, Oct 28, 2020 at 2:48 AM Roman Gushchin <guro@xxxxxx> wrote:
> > >
> > > On Tue, Oct 27, 2020 at 04:02:55PM +0800, Muchun Song wrote:
> > > > If we reparent the slab objects to the root memcg, when we free
> > > > the slab object, we need to update the per-memcg vmstats to keep
> > > > it correct for the root memcg. Now this at least affects the vmstat
> > > > of NR_KERNEL_STACK_KB for !CONFIG_VMAP_STACK when the thread stack
> > > > size is smaller than the PAGE_SIZE.
> > > >
> > > > Fixes: ec9f02384f60 ("mm: workingset: fix vmstat counters for shadow nodes")
> > > > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > >
> > > Can you, please, drop this patch for now?
> > >
> > > I'm working on a bigger cleanup related to the handling of the root memory
> > > cgroup (I sent a link earlier in this thread), which already does a similar change.
> > > There are several issues like this one, so it will be nice to fix them all at once.
> >
> > I have read the patch of https://lkml.org/lkml/2020/10/14/869. You mean this
> > patch fixes this issue? It chooses to uncharge the root memcg. But here we may
> > need to uncharge the root memcg to keep root vmstats correct. If we do not do
> > this, we can see the wrong vmstats via root memory.stat (e.g. NR_KERNEL_STACK_KB).
>
> I pointed at a different patch in the same thread (it looks like you read the first one):
> https://lkml.org/lkml/2020/10/21/612

Got it. Thanks. That is fine to me.

> It contained the following part:
>
> @@ -868,7 +860,7 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  	memcg = mem_cgroup_from_obj(p);
>
>  	/* Untracked pages have no memcg, no lruvec. Update only the node */
> -	if (!memcg || memcg == root_mem_cgroup) {
> +	if (!memcg) {
>  		__mod_node_page_state(pgdat, idx, val);
>  	} else {
>  		lruvec = mem_cgroup_lruvec(memcg, pgdat);
>
> So it's exactly what your patch does.
>
> Thanks!

--
Yours,
Muchun
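
For reference, a sketch of how __mod_lruvec_slab_state() would read with the
root_mem_cgroup check dropped, reconstructed around the hunk quoted above. The
lines outside the hunk (the pgdat lookup, the RCU locking and the
__mod_lruvec_state() call) are an approximation of the mm/memcontrol.c code of
that era, not text quoted from the thread:

void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
{
	pg_data_t *pgdat = page_pgdat(virt_to_page(p));
	struct mem_cgroup *memcg;
	struct lruvec *lruvec;

	rcu_read_lock();
	memcg = mem_cgroup_from_obj(p);

	/*
	 * Untracked pages have no memcg, no lruvec: update only the node.
	 * With the "memcg == root_mem_cgroup" test removed, slab objects
	 * that were reparented to the root memcg now take the else branch,
	 * so the root memcg's vmstats (e.g. NR_KERNEL_STACK_KB) stay
	 * correct when such objects are freed.
	 */
	if (!memcg) {
		__mod_node_page_state(pgdat, idx, val);
	} else {
		lruvec = mem_cgroup_lruvec(memcg, pgdat);
		__mod_lruvec_state(lruvec, idx, val);
	}
	rcu_read_unlock();
}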