The mem cgroup soft limit tree on each node tracks how much a cgroup has exceeded its soft limit and sorts cgroups by their excess usage.  On page release, the trees are not updated right away; updates are deferred until a batch of pages belonging to the same cgroup has been gathered.  This reduces how often the soft limit tree is updated and how often the tree and the associated cgroup are locked.  However, the batch could contain pages from multiple nodes, while only the soft limit tree of one node would get updated.

Change the logic so that each batch of pages belongs to a single mem cgroup and a single memory node.  Whenever a page from a different node is encountered, the batch collected so far is flushed and that node's tree is updated.

Reviewed-by: Ying Huang <ying.huang@xxxxxxxxx>
Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
 mm/memcontrol.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d72449eeb85a..f5a4a0e4e2ec 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6804,6 +6804,7 @@ struct uncharge_gather {
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
 	struct page *dummy_page;
+	int nid;
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	 * exclusive access to the page.
 	 */
 
-	if (ug->memcg != page_memcg(page)) {
+	if (ug->memcg != page_memcg(page) ||
+	    /* uncharge batch updates the soft limit tree on a per-node basis */
+	    (ug->dummy_page && ug->nid != page_to_nid(page))) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
@@ -6869,6 +6872,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 
 	ug->pgpgout++;
 	ug->dummy_page = page;
+	ug->nid = page_to_nid(page);
 	page->memcg_data = 0;
 	css_put(&ug->memcg->css);
 }
-- 
2.20.1
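
For illustration only, here is a minimal userspace sketch of the batching rule the patch implements: the gathered pages are flushed whenever either the cgroup or the node changes.  The struct gather, flush() and gather_page() names are made up for this sketch and are not kernel API; have_page stands in for the ug->dummy_page check in the real code.

	#include <stdio.h>
	#include <string.h>

	/* Simplified stand-in for struct mem_cgroup. */
	struct memcg { int id; };

	struct gather {
		struct memcg *memcg;	/* cgroup of the pages batched so far */
		unsigned long nr_pages;	/* pages accumulated in this batch */
		int nid;		/* node of the pages batched so far */
		int have_page;		/* batch already holds at least one page */
	};

	static void flush(struct gather *g)
	{
		if (g->memcg)
			printf("flush: memcg=%d nid=%d pages=%lu\n",
			       g->memcg->id, g->nid, g->nr_pages);
		memset(g, 0, sizeof(*g));
	}

	/* Mirror of the patched uncharge_page() flow: flush the batch when
	 * the cgroup changes, or (new in this patch) when the node changes. */
	static void gather_page(struct gather *g, struct memcg *mc, int nid)
	{
		if (g->memcg != mc || (g->have_page && g->nid != nid))
			flush(g);

		g->memcg = mc;
		g->nid = nid;
		g->have_page = 1;
		g->nr_pages++;
	}

	int main(void)
	{
		struct memcg a = { .id = 1 }, b = { .id = 2 };
		struct gather g = { 0 };

		gather_page(&g, &a, 0);	/* starts a batch for (a, node 0) */
		gather_page(&g, &a, 0);	/* same cgroup and node: keep batching */
		gather_page(&g, &a, 1);	/* node changed: previous batch flushed */
		gather_page(&g, &b, 1);	/* cgroup changed: flush again */
		flush(&g);		/* drain whatever is left */
		return 0;
	}

With this rule, every flush covers pages from exactly one (memcg, node) pair, so the soft limit tree that gets updated is always the tree of the node the batched pages actually came from.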