On Tue, Mar 08, 2022 at 08:18:24PM -0500, Waiman Long wrote:
> Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
> to be race free"), we are tracking the total number of lru
> entries in a list_lru_node in its nr_items field. In the case of
> memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
> is 0. We don't even need to take the nlru->lock as no new lru entry
> could be added by a racing list_lru_add() to the draining src_idx memcg
> at this point.
>
> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
> ---
>  mm/list_lru.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index ba76428ceece..c669d87001a6 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -394,6 +394,12 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
>  	int dst_idx = dst_memcg->kmemcg_id;
>  	struct list_lru_one *src, *dst;
>
> +	/*
> +	 * If there is no lru entry in this nlru, we can skip it immediately.
> +	 */
> +	if (!READ_ONCE(nlru->nr_items))
> +		return;

This is a per-node counter, not a per-memcg one, right?
If so, are we optimizing for the case when all lru items belong to
one node and the other nodes are empty?
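
To spell out what I mean, here is a simplified, userspace-compilable model
of how I read the counters. The struct layouts below are approximations for
illustration only, not the real mm/list_lru.h definitions; the point is just
that nr_items in list_lru_node is a node-wide total, so the proposed early
return can only skip the node when every memcg's list on it is empty:

#include <stdio.h>

/* Simplified stand-ins for the kernel structures (layout approximated). */
struct fake_list_lru_one {
	long nr_items;			/* items on one memcg's list for this node */
};

struct fake_list_lru_node {
	long nr_items;			/* node-wide total across all memcgs */
	struct fake_list_lru_one memcg[4];	/* pretend per-memcg lists */
};

/* Mirrors the proposed check: skips only when the whole node is empty. */
static int reparent_node_would_skip(const struct fake_list_lru_node *nlru)
{
	return nlru->nr_items == 0;
}

int main(void)
{
	struct fake_list_lru_node node = { 0 };

	/* The src memcg (index 1) is empty, but another memcg still has items. */
	node.memcg[2].nr_items = 5;
	node.nr_items = 5;

	/* Prints "skip=0": the per-node check cannot skip this case. */
	printf("skip=%d\n", reparent_node_would_skip(&node));
	return 0;
}

So the shortcut helps when an entire node's lru is empty, which is why I am
asking whether that is the distribution of items you are optimizing for.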