On Thu, 5 Sep 2019 15:10:34 +0800 Honglei Wang <honglei.wang@xxxxxxxxxx> wrote:

> lruvec_lru_size() is involving lruvec_page_state_local() to get the
> lru_size in the current code. It's base on lruvec_stat_local.count[]
> of mem_cgroup_per_node. This counter is updated in batch. It won't
> do charge if the number of coming pages doesn't meet the needs of
> MEMCG_CHARGE_BATCH who's defined as 32 now.
>
> The testcase in LTP madvise09[1] fails due to small block memory is
> not charged. It creates a new memcgroup and sets up 32 MADV_FREE
> pages. Then it forks child who will introduce memory pressure in the
> memcgroup. The MADV_FREE pages are expected to be released under the
> pressure, but 32 is not more than MEMCG_CHARGE_BATCH and these pages
> won't be charged in lruvec_stat_local.count[] until some more pages
> come in to satisfy the needs of batch charging. So these MADV_FREE
> pages can't be freed in memory pressure which is a bit conflicted
> with the definition of MADV_FREE.
>
> Getting lru_size base on lru_zone_size of mem_cgroup_per_node which
> is not updated in batch can make it a bit more accurate in similar
> scenario.

I redid the changelog somewhat:

: lruvec_lru_size() is invoking lruvec_page_state_local() to get the
: lru_size. It's based on lruvec_stat_local.count[] of mem_cgroup_per_node.
: This counter is updated in a batched way. It won't be charged if the
: number of incoming pages doesn't meet the needs of MEMCG_CHARGE_BATCH,
: which is defined as 32.
:
: The testcase in LTP madvise09[1] fails because small blocks of memory are
: not charged. It creates a new memcgroup and sets up 32 MADV_FREE pages.
: Then it forks a child who will introduce memory pressure in the memcgroup.
: The MADV_FREE pages are expected to be released under the pressure, but
: 32 is not more than MEMCG_CHARGE_BATCH and these pages won't be charged in
: lruvec_stat_local.count[] until some more pages come in to satisfy the
: needs of batch charging. So these MADV_FREE pages can't be freed under
: memory pressure, which conflicts with the definition of MADV_FREE.
:
: Getting the lru_size based on lru_zone_size of mem_cgroup_per_node, which
: is not updated via batching, makes it more accurate in this scenario.
:
: This is effectively a partial reversion of 1a61ab8038e72 ("mm: memcontrol:
: replace zone summing with lruvec_page_state()").
:
: [1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise09.c

> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -354,12 +354,13 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>   */
>  unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
>  {
> -	unsigned long lru_size;
> +	unsigned long lru_size = 0;
>  	int zid;
>  
> -	if (!mem_cgroup_disabled())
> -		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> -	else
> +	if (!mem_cgroup_disabled()) {
> +		for (zid = 0; zid < MAX_NR_ZONES; zid++)
> +			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
> +	} else
>  		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
>  
>  	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {

Do we think this problem is serious enough to warrant backporting into earlier kernels?